Oct 01 15:58:07 localhost kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct 01 15:58:07 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 01 15:58:07 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 01 15:58:07 localhost kernel: BIOS-provided physical RAM map:
Oct 01 15:58:07 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 01 15:58:07 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 01 15:58:07 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 01 15:58:07 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 01 15:58:07 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 01 15:58:07 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 01 15:58:07 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 01 15:58:07 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 01 15:58:07 localhost kernel: NX (Execute Disable) protection: active
Oct 01 15:58:07 localhost kernel: APIC: Static calls initialized
Oct 01 15:58:07 localhost kernel: SMBIOS 2.8 present.
Oct 01 15:58:07 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 01 15:58:07 localhost kernel: Hypervisor detected: KVM
Oct 01 15:58:07 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 01 15:58:07 localhost kernel: kvm-clock: using sched offset of 4577219780 cycles
Oct 01 15:58:07 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 01 15:58:07 localhost kernel: tsc: Detected 2800.000 MHz processor
Oct 01 15:58:07 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 01 15:58:07 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 01 15:58:07 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 01 15:58:07 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 01 15:58:07 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 01 15:58:07 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 01 15:58:07 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 01 15:58:07 localhost kernel: Using GB pages for direct mapping
Oct 01 15:58:07 localhost kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct 01 15:58:07 localhost kernel: ACPI: Early table checksum verification disabled
Oct 01 15:58:07 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 01 15:58:07 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 01 15:58:07 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 01 15:58:07 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 01 15:58:07 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 01 15:58:07 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 01 15:58:07 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 01 15:58:07 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 01 15:58:07 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 01 15:58:07 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 01 15:58:07 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 01 15:58:07 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 01 15:58:07 localhost kernel: No NUMA configuration found
Oct 01 15:58:07 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 01 15:58:07 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct 01 15:58:07 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 01 15:58:07 localhost kernel: Zone ranges:
Oct 01 15:58:07 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 01 15:58:07 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 01 15:58:07 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 01 15:58:07 localhost kernel:   Device   empty
Oct 01 15:58:07 localhost kernel: Movable zone start for each node
Oct 01 15:58:07 localhost kernel: Early memory node ranges
Oct 01 15:58:07 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 01 15:58:07 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 01 15:58:07 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 01 15:58:07 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 01 15:58:07 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 01 15:58:07 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 01 15:58:07 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 01 15:58:07 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 01 15:58:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 01 15:58:07 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 01 15:58:07 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 01 15:58:07 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 01 15:58:07 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 01 15:58:07 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 01 15:58:07 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 01 15:58:07 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 01 15:58:07 localhost kernel: TSC deadline timer available
Oct 01 15:58:07 localhost kernel: CPU topo: Max. logical packages:   8
Oct 01 15:58:07 localhost kernel: CPU topo: Max. logical dies:       8
Oct 01 15:58:07 localhost kernel: CPU topo: Max. dies per package:   1
Oct 01 15:58:07 localhost kernel: CPU topo: Max. threads per core:   1
Oct 01 15:58:07 localhost kernel: CPU topo: Num. cores per package:     1
Oct 01 15:58:07 localhost kernel: CPU topo: Num. threads per package:   1
Oct 01 15:58:07 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 01 15:58:07 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 01 15:58:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 01 15:58:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 01 15:58:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 01 15:58:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 01 15:58:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 01 15:58:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 01 15:58:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 01 15:58:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 01 15:58:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 01 15:58:07 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 01 15:58:07 localhost kernel: Booting paravirtualized kernel on KVM
Oct 01 15:58:07 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 01 15:58:07 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 01 15:58:07 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 01 15:58:07 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Oct 01 15:58:07 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Oct 01 15:58:07 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 01 15:58:07 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 01 15:58:07 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct 01 15:58:07 localhost kernel: random: crng init done
Oct 01 15:58:07 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 01 15:58:07 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 01 15:58:07 localhost kernel: Fallback order for Node 0: 0 
Oct 01 15:58:07 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 01 15:58:07 localhost kernel: Policy zone: Normal
Oct 01 15:58:07 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 01 15:58:07 localhost kernel: software IO TLB: area num 8.
Oct 01 15:58:07 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 01 15:58:07 localhost kernel: ftrace: allocating 49370 entries in 193 pages
Oct 01 15:58:07 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 01 15:58:07 localhost kernel: Dynamic Preempt: voluntary
Oct 01 15:58:07 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 01 15:58:07 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 01 15:58:07 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 01 15:58:07 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 01 15:58:07 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 01 15:58:07 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 01 15:58:07 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 01 15:58:07 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 01 15:58:07 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 01 15:58:07 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 01 15:58:07 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 01 15:58:07 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 01 15:58:07 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 01 15:58:07 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 01 15:58:07 localhost kernel: Console: colour VGA+ 80x25
Oct 01 15:58:07 localhost kernel: printk: console [ttyS0] enabled
Oct 01 15:58:07 localhost kernel: ACPI: Core revision 20230331
Oct 01 15:58:07 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 01 15:58:07 localhost kernel: x2apic enabled
Oct 01 15:58:07 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 01 15:58:07 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 01 15:58:07 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct 01 15:58:07 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 01 15:58:07 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 01 15:58:07 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 01 15:58:07 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 01 15:58:07 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 01 15:58:07 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 01 15:58:07 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 01 15:58:07 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 01 15:58:07 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 01 15:58:07 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 01 15:58:07 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 01 15:58:07 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 01 15:58:07 localhost kernel: x86/bugs: return thunk changed
Oct 01 15:58:07 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 01 15:58:07 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 01 15:58:07 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 01 15:58:07 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 01 15:58:07 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 01 15:58:07 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 01 15:58:07 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 01 15:58:07 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 01 15:58:07 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 01 15:58:07 localhost kernel: landlock: Up and running.
Oct 01 15:58:07 localhost kernel: Yama: becoming mindful.
Oct 01 15:58:07 localhost kernel: SELinux:  Initializing.
Oct 01 15:58:07 localhost kernel: LSM support for eBPF active
Oct 01 15:58:07 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 01 15:58:07 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 01 15:58:07 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 01 15:58:07 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 01 15:58:07 localhost kernel: ... version:                0
Oct 01 15:58:07 localhost kernel: ... bit width:              48
Oct 01 15:58:07 localhost kernel: ... generic registers:      6
Oct 01 15:58:07 localhost kernel: ... value mask:             0000ffffffffffff
Oct 01 15:58:07 localhost kernel: ... max period:             00007fffffffffff
Oct 01 15:58:07 localhost kernel: ... fixed-purpose events:   0
Oct 01 15:58:07 localhost kernel: ... event mask:             000000000000003f
Oct 01 15:58:07 localhost kernel: signal: max sigframe size: 1776
Oct 01 15:58:07 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 01 15:58:07 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 01 15:58:07 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 01 15:58:07 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 01 15:58:07 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 01 15:58:07 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 01 15:58:07 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct 01 15:58:07 localhost kernel: node 0 deferred pages initialised in 29ms
Oct 01 15:58:07 localhost kernel: Memory: 7765680K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616508K reserved, 0K cma-reserved)
Oct 01 15:58:07 localhost kernel: devtmpfs: initialized
Oct 01 15:58:07 localhost kernel: x86/mm: Memory block size: 128MB
Oct 01 15:58:07 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 01 15:58:07 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 01 15:58:07 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 01 15:58:07 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 01 15:58:07 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 01 15:58:07 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 01 15:58:07 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 01 15:58:07 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 01 15:58:07 localhost kernel: audit: type=2000 audit(1759334285.110:1): state=initialized audit_enabled=0 res=1
Oct 01 15:58:07 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 01 15:58:07 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 01 15:58:07 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 01 15:58:07 localhost kernel: cpuidle: using governor menu
Oct 01 15:58:07 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 01 15:58:07 localhost kernel: PCI: Using configuration type 1 for base access
Oct 01 15:58:07 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 01 15:58:07 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 01 15:58:07 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 01 15:58:07 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 01 15:58:07 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 01 15:58:07 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 01 15:58:07 localhost kernel: Demotion targets for Node 0: null
Oct 01 15:58:07 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 01 15:58:07 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 01 15:58:07 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 01 15:58:07 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 01 15:58:07 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 01 15:58:07 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 01 15:58:07 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 01 15:58:07 localhost kernel: ACPI: Interpreter enabled
Oct 01 15:58:07 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 01 15:58:07 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 01 15:58:07 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 01 15:58:07 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 01 15:58:07 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 01 15:58:07 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 01 15:58:07 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [3] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [4] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [5] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [6] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [7] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [8] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [9] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [10] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [11] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [12] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [13] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [14] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [15] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [16] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [17] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [18] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [19] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [20] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [21] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [22] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [23] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [24] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [25] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [26] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [27] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [28] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [29] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [30] registered
Oct 01 15:58:07 localhost kernel: acpiphp: Slot [31] registered
Oct 01 15:58:07 localhost kernel: PCI host bridge to bus 0000:00
Oct 01 15:58:07 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 01 15:58:07 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 01 15:58:07 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 01 15:58:07 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 01 15:58:07 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 01 15:58:07 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 01 15:58:07 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 01 15:58:07 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 01 15:58:07 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 01 15:58:07 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 01 15:58:07 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 01 15:58:07 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 01 15:58:07 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 01 15:58:07 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 01 15:58:07 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 01 15:58:07 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 01 15:58:07 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 01 15:58:07 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 01 15:58:07 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 01 15:58:07 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 01 15:58:07 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 01 15:58:07 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 01 15:58:07 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 01 15:58:07 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 01 15:58:07 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 01 15:58:07 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 01 15:58:07 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 01 15:58:07 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 01 15:58:07 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 01 15:58:07 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 01 15:58:07 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 01 15:58:07 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 01 15:58:07 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 01 15:58:07 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 01 15:58:07 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 01 15:58:07 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 01 15:58:07 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 01 15:58:07 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 01 15:58:07 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 01 15:58:07 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 01 15:58:07 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 01 15:58:07 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 01 15:58:07 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 01 15:58:07 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 01 15:58:07 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 01 15:58:07 localhost kernel: iommu: Default domain type: Translated
Oct 01 15:58:07 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 01 15:58:07 localhost kernel: SCSI subsystem initialized
Oct 01 15:58:07 localhost kernel: ACPI: bus type USB registered
Oct 01 15:58:07 localhost kernel: usbcore: registered new interface driver usbfs
Oct 01 15:58:07 localhost kernel: usbcore: registered new interface driver hub
Oct 01 15:58:07 localhost kernel: usbcore: registered new device driver usb
Oct 01 15:58:07 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 01 15:58:07 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 01 15:58:07 localhost kernel: PTP clock support registered
Oct 01 15:58:07 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 01 15:58:07 localhost kernel: NetLabel: Initializing
Oct 01 15:58:07 localhost kernel: NetLabel:  domain hash size = 128
Oct 01 15:58:07 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 01 15:58:07 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 01 15:58:07 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 01 15:58:07 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 01 15:58:07 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 01 15:58:07 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Oct 01 15:58:07 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 01 15:58:07 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 01 15:58:07 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 01 15:58:07 localhost kernel: vgaarb: loaded
Oct 01 15:58:07 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 01 15:58:07 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 01 15:58:07 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 01 15:58:07 localhost kernel: pnp: PnP ACPI init
Oct 01 15:58:07 localhost kernel: pnp 00:03: [dma 2]
Oct 01 15:58:07 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 01 15:58:07 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 01 15:58:07 localhost kernel: NET: Registered PF_INET protocol family
Oct 01 15:58:07 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 01 15:58:07 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 01 15:58:07 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 01 15:58:07 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 01 15:58:07 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 01 15:58:07 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 01 15:58:07 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 01 15:58:07 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 01 15:58:07 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 01 15:58:07 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 01 15:58:07 localhost kernel: NET: Registered PF_XDP protocol family
Oct 01 15:58:07 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 01 15:58:07 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 01 15:58:07 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 01 15:58:07 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 01 15:58:07 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 01 15:58:07 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 01 15:58:07 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 01 15:58:07 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 01 15:58:07 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 105863 usecs
Oct 01 15:58:07 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 01 15:58:07 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 01 15:58:07 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 01 15:58:07 localhost kernel: ACPI: bus type thunderbolt registered
Oct 01 15:58:07 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 01 15:58:07 localhost kernel: Initialise system trusted keyrings
Oct 01 15:58:07 localhost kernel: Key type blacklist registered
Oct 01 15:58:07 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 01 15:58:07 localhost kernel: zbud: loaded
Oct 01 15:58:07 localhost kernel: integrity: Platform Keyring initialized
Oct 01 15:58:07 localhost kernel: integrity: Machine keyring initialized
Oct 01 15:58:07 localhost kernel: Freeing initrd memory: 86104K
Oct 01 15:58:07 localhost kernel: NET: Registered PF_ALG protocol family
Oct 01 15:58:07 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 01 15:58:07 localhost kernel: Key type asymmetric registered
Oct 01 15:58:07 localhost kernel: Asymmetric key parser 'x509' registered
Oct 01 15:58:07 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 01 15:58:07 localhost kernel: io scheduler mq-deadline registered
Oct 01 15:58:07 localhost kernel: io scheduler kyber registered
Oct 01 15:58:07 localhost kernel: io scheduler bfq registered
Oct 01 15:58:07 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 01 15:58:07 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 01 15:58:07 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 01 15:58:07 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 01 15:58:07 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 01 15:58:07 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 01 15:58:07 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 01 15:58:07 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 01 15:58:07 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 01 15:58:07 localhost kernel: Non-volatile memory driver v1.3
Oct 01 15:58:07 localhost kernel: rdac: device handler registered
Oct 01 15:58:07 localhost kernel: hp_sw: device handler registered
Oct 01 15:58:07 localhost kernel: emc: device handler registered
Oct 01 15:58:07 localhost kernel: alua: device handler registered
Oct 01 15:58:07 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 01 15:58:07 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 01 15:58:07 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 01 15:58:07 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 01 15:58:07 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 01 15:58:07 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 01 15:58:07 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 01 15:58:07 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct 01 15:58:07 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 01 15:58:07 localhost kernel: hub 1-0:1.0: USB hub found
Oct 01 15:58:07 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 01 15:58:07 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 01 15:58:07 localhost kernel: usbserial: USB Serial support registered for generic
Oct 01 15:58:07 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 01 15:58:07 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 01 15:58:07 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 01 15:58:07 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 01 15:58:07 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 01 15:58:07 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 01 15:58:07 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 01 15:58:07 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-01T15:58:06 UTC (1759334286)
Oct 01 15:58:07 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 01 15:58:07 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 01 15:58:07 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 01 15:58:07 localhost kernel: usbcore: registered new interface driver usbhid
Oct 01 15:58:07 localhost kernel: usbhid: USB HID core driver
Oct 01 15:58:07 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 01 15:58:07 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 01 15:58:07 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 01 15:58:07 localhost kernel: Initializing XFRM netlink socket
Oct 01 15:58:07 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 01 15:58:07 localhost kernel: Segment Routing with IPv6
Oct 01 15:58:07 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 01 15:58:07 localhost kernel: mpls_gso: MPLS GSO support
Oct 01 15:58:07 localhost kernel: IPI shorthand broadcast: enabled
Oct 01 15:58:07 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 01 15:58:07 localhost kernel: AES CTR mode by8 optimization enabled
Oct 01 15:58:07 localhost kernel: sched_clock: Marking stable (1313016260, 143279210)->(1588104850, -131809380)
Oct 01 15:58:07 localhost kernel: registered taskstats version 1
Oct 01 15:58:07 localhost kernel: Loading compiled-in X.509 certificates
Oct 01 15:58:07 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 01 15:58:07 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 01 15:58:07 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 01 15:58:07 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 01 15:58:07 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 01 15:58:07 localhost kernel: Demotion targets for Node 0: null
Oct 01 15:58:07 localhost kernel: page_owner is disabled
Oct 01 15:58:07 localhost kernel: Key type .fscrypt registered
Oct 01 15:58:07 localhost kernel: Key type fscrypt-provisioning registered
Oct 01 15:58:07 localhost kernel: Key type big_key registered
Oct 01 15:58:07 localhost kernel: Key type encrypted registered
Oct 01 15:58:07 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 01 15:58:07 localhost kernel: Loading compiled-in module X.509 certificates
Oct 01 15:58:07 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 01 15:58:07 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 01 15:58:07 localhost kernel: ima: No architecture policies found
Oct 01 15:58:07 localhost kernel: evm: Initialising EVM extended attributes:
Oct 01 15:58:07 localhost kernel: evm: security.selinux
Oct 01 15:58:07 localhost kernel: evm: security.SMACK64 (disabled)
Oct 01 15:58:07 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 01 15:58:07 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 01 15:58:07 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 01 15:58:07 localhost kernel: evm: security.apparmor (disabled)
Oct 01 15:58:07 localhost kernel: evm: security.ima
Oct 01 15:58:07 localhost kernel: evm: security.capability
Oct 01 15:58:07 localhost kernel: evm: HMAC attrs: 0x1
Oct 01 15:58:07 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 01 15:58:07 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 01 15:58:07 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 01 15:58:07 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 01 15:58:07 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 01 15:58:07 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 01 15:58:07 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 01 15:58:07 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 01 15:58:07 localhost kernel: Running certificate verification RSA selftest
Oct 01 15:58:07 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 01 15:58:07 localhost kernel: Running certificate verification ECDSA selftest
Oct 01 15:58:07 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 01 15:58:07 localhost kernel: clk: Disabling unused clocks
Oct 01 15:58:07 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 01 15:58:07 localhost kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct 01 15:58:07 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 01 15:58:07 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct 01 15:58:07 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 01 15:58:07 localhost kernel: Run /init as init process
Oct 01 15:58:07 localhost kernel:   with arguments:
Oct 01 15:58:07 localhost kernel:     /init
Oct 01 15:58:07 localhost kernel:   with environment:
Oct 01 15:58:07 localhost kernel:     HOME=/
Oct 01 15:58:07 localhost kernel:     TERM=linux
Oct 01 15:58:07 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64
Oct 01 15:58:07 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 01 15:58:07 localhost systemd[1]: Detected virtualization kvm.
Oct 01 15:58:07 localhost systemd[1]: Detected architecture x86-64.
Oct 01 15:58:07 localhost systemd[1]: Running in initrd.
Oct 01 15:58:07 localhost systemd[1]: No hostname configured, using default hostname.
Oct 01 15:58:07 localhost systemd[1]: Hostname set to <localhost>.
Oct 01 15:58:07 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 01 15:58:07 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 01 15:58:07 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 01 15:58:07 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 01 15:58:07 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 01 15:58:07 localhost systemd[1]: Reached target Local File Systems.
Oct 01 15:58:07 localhost systemd[1]: Reached target Path Units.
Oct 01 15:58:07 localhost systemd[1]: Reached target Slice Units.
Oct 01 15:58:07 localhost systemd[1]: Reached target Swaps.
Oct 01 15:58:07 localhost systemd[1]: Reached target Timer Units.
Oct 01 15:58:07 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 01 15:58:07 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 01 15:58:07 localhost systemd[1]: Listening on Journal Socket.
Oct 01 15:58:07 localhost systemd[1]: Listening on udev Control Socket.
Oct 01 15:58:07 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 01 15:58:07 localhost systemd[1]: Reached target Socket Units.
Oct 01 15:58:07 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 01 15:58:07 localhost systemd[1]: Starting Journal Service...
Oct 01 15:58:07 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 01 15:58:07 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 01 15:58:07 localhost systemd[1]: Starting Create System Users...
Oct 01 15:58:07 localhost systemd[1]: Starting Setup Virtual Console...
Oct 01 15:58:07 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 01 15:58:07 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 01 15:58:07 localhost systemd[1]: Finished Create System Users.
Oct 01 15:58:07 localhost systemd-journald[306]: Journal started
Oct 01 15:58:07 localhost systemd-journald[306]: Runtime Journal (/run/log/journal/815dd0efd3784739986b1e44e6c1aa0a) is 8.0M, max 153.5M, 145.5M free.
Oct 01 15:58:07 localhost systemd-sysusers[311]: Creating group 'users' with GID 100.
Oct 01 15:58:07 localhost systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Oct 01 15:58:07 localhost systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 01 15:58:07 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 01 15:58:07 localhost systemd[1]: Started Journal Service.
Oct 01 15:58:07 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 01 15:58:07 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 01 15:58:07 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 01 15:58:07 localhost systemd[1]: Finished Setup Virtual Console.
Oct 01 15:58:07 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 01 15:58:07 localhost systemd[1]: Starting dracut cmdline hook...
Oct 01 15:58:07 localhost dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Oct 01 15:58:07 localhost dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 01 15:58:07 localhost systemd[1]: Finished dracut cmdline hook.
Oct 01 15:58:07 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 01 15:58:07 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 01 15:58:07 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 01 15:58:07 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 01 15:58:08 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 01 15:58:08 localhost kernel: RPC: Registered udp transport module.
Oct 01 15:58:08 localhost kernel: RPC: Registered tcp transport module.
Oct 01 15:58:08 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 01 15:58:08 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 01 15:58:08 localhost rpc.statd[443]: Version 2.5.4 starting
Oct 01 15:58:08 localhost rpc.statd[443]: Initializing NSM state
Oct 01 15:58:08 localhost rpc.idmapd[448]: Setting log level to 0
Oct 01 15:58:08 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 01 15:58:08 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 01 15:58:08 localhost systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Oct 01 15:58:08 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 01 15:58:08 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 01 15:58:08 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 01 15:58:08 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 01 15:58:08 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 01 15:58:08 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 01 15:58:08 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 01 15:58:08 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 01 15:58:08 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 01 15:58:08 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 01 15:58:08 localhost systemd[1]: Reached target Network.
Oct 01 15:58:08 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 01 15:58:08 localhost systemd[1]: Starting dracut initqueue hook...
Oct 01 15:58:08 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 01 15:58:08 localhost kernel: libata version 3.00 loaded.
Oct 01 15:58:08 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 01 15:58:08 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 01 15:58:08 localhost systemd[1]: Reached target System Initialization.
Oct 01 15:58:08 localhost systemd[1]: Reached target Basic System.
Oct 01 15:58:08 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Oct 01 15:58:08 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 01 15:58:08 localhost kernel: scsi host0: ata_piix
Oct 01 15:58:08 localhost kernel: scsi host1: ata_piix
Oct 01 15:58:08 localhost kernel:  vda: vda1
Oct 01 15:58:08 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 01 15:58:08 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 01 15:58:08 localhost kernel: ata1: found unknown device (class 0)
Oct 01 15:58:08 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 01 15:58:08 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 01 15:58:08 localhost systemd-udevd[475]: Network interface NamePolicy= disabled on kernel command line.
Oct 01 15:58:08 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 01 15:58:08 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 01 15:58:08 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 01 15:58:08 localhost systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 01 15:58:08 localhost systemd[1]: Reached target Initrd Root Device.
Oct 01 15:58:08 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 01 15:58:08 localhost systemd[1]: Finished dracut initqueue hook.
Oct 01 15:58:08 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 01 15:58:08 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 01 15:58:08 localhost systemd[1]: Reached target Remote File Systems.
Oct 01 15:58:08 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 01 15:58:08 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 01 15:58:08 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct 01 15:58:08 localhost systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Oct 01 15:58:08 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 01 15:58:08 localhost systemd[1]: Mounting /sysroot...
Oct 01 15:58:09 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 01 15:58:09 localhost kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct 01 15:58:09 localhost kernel: XFS (vda1): Ending clean mount
Oct 01 15:58:09 localhost systemd[1]: Mounted /sysroot.
Oct 01 15:58:09 localhost systemd[1]: Reached target Initrd Root File System.
Oct 01 15:58:09 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 01 15:58:09 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 01 15:58:09 localhost systemd[1]: Reached target Initrd File Systems.
Oct 01 15:58:09 localhost systemd[1]: Reached target Initrd Default Target.
Oct 01 15:58:09 localhost systemd[1]: Starting dracut mount hook...
Oct 01 15:58:09 localhost systemd[1]: Finished dracut mount hook.
Oct 01 15:58:09 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 01 15:58:09 localhost rpc.idmapd[448]: exiting on signal 15
Oct 01 15:58:09 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 01 15:58:09 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 01 15:58:09 localhost systemd[1]: Stopped target Network.
Oct 01 15:58:09 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 01 15:58:09 localhost systemd[1]: Stopped target Timer Units.
Oct 01 15:58:09 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 01 15:58:09 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 01 15:58:09 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 01 15:58:09 localhost systemd[1]: Stopped target Basic System.
Oct 01 15:58:09 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 01 15:58:09 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 01 15:58:09 localhost systemd[1]: Stopped target Path Units.
Oct 01 15:58:09 localhost systemd[1]: Stopped target Remote File Systems.
Oct 01 15:58:09 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 01 15:58:09 localhost systemd[1]: Stopped target Slice Units.
Oct 01 15:58:09 localhost systemd[1]: Stopped target Socket Units.
Oct 01 15:58:09 localhost systemd[1]: Stopped target System Initialization.
Oct 01 15:58:09 localhost systemd[1]: Stopped target Local File Systems.
Oct 01 15:58:09 localhost systemd[1]: Stopped target Swaps.
Oct 01 15:58:09 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Stopped dracut mount hook.
Oct 01 15:58:09 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 01 15:58:09 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 01 15:58:09 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 01 15:58:09 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 01 15:58:09 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 01 15:58:09 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 01 15:58:09 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 01 15:58:09 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 01 15:58:09 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 01 15:58:09 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 01 15:58:09 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 01 15:58:09 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 01 15:58:09 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Closed udev Control Socket.
Oct 01 15:58:09 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Closed udev Kernel Socket.
Oct 01 15:58:09 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 01 15:58:09 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 01 15:58:09 localhost systemd[1]: Starting Cleanup udev Database...
Oct 01 15:58:09 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 01 15:58:09 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 01 15:58:09 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Stopped Create System Users.
Oct 01 15:58:09 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 01 15:58:09 localhost systemd[1]: Finished Cleanup udev Database.
Oct 01 15:58:09 localhost systemd[1]: Reached target Switch Root.
Oct 01 15:58:09 localhost systemd[1]: Starting Switch Root...
Oct 01 15:58:09 localhost systemd[1]: Switching root.
Oct 01 15:58:09 localhost systemd-journald[306]: Journal stopped
Oct 01 15:58:11 localhost systemd-journald[306]: Received SIGTERM from PID 1 (systemd).
Oct 01 15:58:11 localhost kernel: audit: type=1404 audit(1759334289.990:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 01 15:58:11 localhost kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 15:58:11 localhost kernel: SELinux:  policy capability open_perms=1
Oct 01 15:58:11 localhost kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 15:58:11 localhost kernel: SELinux:  policy capability always_check_network=0
Oct 01 15:58:11 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 15:58:11 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 15:58:11 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 15:58:11 localhost kernel: audit: type=1403 audit(1759334290.170:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 01 15:58:11 localhost systemd[1]: Successfully loaded SELinux policy in 183.257ms.
Oct 01 15:58:11 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.843ms.
Oct 01 15:58:11 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 01 15:58:11 localhost systemd[1]: Detected virtualization kvm.
Oct 01 15:58:11 localhost systemd[1]: Detected architecture x86-64.
Oct 01 15:58:11 localhost systemd-rc-local-generator[635]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 15:58:11 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 01 15:58:11 localhost systemd[1]: Stopped Switch Root.
Oct 01 15:58:11 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 01 15:58:11 localhost systemd[1]: Created slice Slice /system/getty.
Oct 01 15:58:11 localhost systemd[1]: Created slice Slice /system/serial-getty.
Oct 01 15:58:11 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Oct 01 15:58:11 localhost systemd[1]: Created slice User and Session Slice.
Oct 01 15:58:11 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 01 15:58:11 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Oct 01 15:58:11 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 01 15:58:11 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 01 15:58:11 localhost systemd[1]: Stopped target Switch Root.
Oct 01 15:58:11 localhost systemd[1]: Stopped target Initrd File Systems.
Oct 01 15:58:11 localhost systemd[1]: Stopped target Initrd Root File System.
Oct 01 15:58:11 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Oct 01 15:58:11 localhost systemd[1]: Reached target Path Units.
Oct 01 15:58:11 localhost systemd[1]: Reached target rpc_pipefs.target.
Oct 01 15:58:11 localhost systemd[1]: Reached target Slice Units.
Oct 01 15:58:11 localhost systemd[1]: Reached target Swaps.
Oct 01 15:58:11 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Oct 01 15:58:11 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Oct 01 15:58:11 localhost systemd[1]: Reached target RPC Port Mapper.
Oct 01 15:58:11 localhost systemd[1]: Listening on Process Core Dump Socket.
Oct 01 15:58:11 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Oct 01 15:58:11 localhost systemd[1]: Listening on udev Control Socket.
Oct 01 15:58:11 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 01 15:58:11 localhost systemd[1]: Mounting Huge Pages File System...
Oct 01 15:58:11 localhost systemd[1]: Mounting POSIX Message Queue File System...
Oct 01 15:58:11 localhost systemd[1]: Mounting Kernel Debug File System...
Oct 01 15:58:11 localhost systemd[1]: Mounting Kernel Trace File System...
Oct 01 15:58:11 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 01 15:58:11 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 01 15:58:11 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 01 15:58:11 localhost systemd[1]: Starting Load Kernel Module drm...
Oct 01 15:58:11 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Oct 01 15:58:11 localhost systemd[1]: Starting Load Kernel Module fuse...
Oct 01 15:58:11 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 01 15:58:11 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 01 15:58:11 localhost systemd[1]: Stopped File System Check on Root Device.
Oct 01 15:58:11 localhost systemd[1]: Stopped Journal Service.
Oct 01 15:58:11 localhost kernel: fuse: init (API version 7.37)
Oct 01 15:58:11 localhost systemd[1]: Starting Journal Service...
Oct 01 15:58:11 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 01 15:58:11 localhost systemd[1]: Starting Generate network units from Kernel command line...
Oct 01 15:58:11 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 01 15:58:11 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Oct 01 15:58:11 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 01 15:58:11 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 01 15:58:11 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 01 15:58:11 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 01 15:58:11 localhost systemd-journald[676]: Journal started
Oct 01 15:58:11 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct 01 15:58:10 localhost systemd[1]: Queued start job for default target Multi-User System.
Oct 01 15:58:10 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 01 15:58:11 localhost kernel: ACPI: bus type drm_connector registered
Oct 01 15:58:11 localhost systemd[1]: Started Journal Service.
Oct 01 15:58:11 localhost systemd[1]: Mounted Huge Pages File System.
Oct 01 15:58:11 localhost systemd[1]: Mounted POSIX Message Queue File System.
Oct 01 15:58:11 localhost systemd[1]: Mounted Kernel Debug File System.
Oct 01 15:58:11 localhost systemd[1]: Mounted Kernel Trace File System.
Oct 01 15:58:11 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 01 15:58:11 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 01 15:58:11 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 01 15:58:11 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 01 15:58:11 localhost systemd[1]: Finished Load Kernel Module drm.
Oct 01 15:58:11 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 01 15:58:11 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 01 15:58:11 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 01 15:58:11 localhost systemd[1]: Finished Load Kernel Module fuse.
Oct 01 15:58:11 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 01 15:58:11 localhost systemd[1]: Finished Generate network units from Kernel command line.
Oct 01 15:58:11 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 01 15:58:11 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 01 15:58:11 localhost systemd[1]: Mounting FUSE Control File System...
Oct 01 15:58:11 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 01 15:58:11 localhost systemd[1]: Starting Rebuild Hardware Database...
Oct 01 15:58:11 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 01 15:58:11 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 01 15:58:11 localhost systemd[1]: Starting Load/Save OS Random Seed...
Oct 01 15:58:11 localhost systemd[1]: Starting Create System Users...
Oct 01 15:58:11 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct 01 15:58:11 localhost systemd-journald[676]: Received client request to flush runtime journal.
Oct 01 15:58:11 localhost systemd[1]: Mounted FUSE Control File System.
Oct 01 15:58:11 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 01 15:58:11 localhost systemd[1]: Finished Load/Save OS Random Seed.
Oct 01 15:58:11 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 01 15:58:11 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 01 15:58:11 localhost systemd[1]: Finished Create System Users.
Oct 01 15:58:11 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 01 15:58:11 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 01 15:58:11 localhost systemd[1]: Reached target Preparation for Local File Systems.
Oct 01 15:58:11 localhost systemd[1]: Reached target Local File Systems.
Oct 01 15:58:11 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 01 15:58:11 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 01 15:58:11 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 01 15:58:11 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 01 15:58:11 localhost systemd[1]: Starting Automatic Boot Loader Update...
Oct 01 15:58:11 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 01 15:58:11 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 01 15:58:11 localhost bootctl[693]: Couldn't find EFI system partition, skipping.
Oct 01 15:58:11 localhost systemd[1]: Finished Automatic Boot Loader Update.
Oct 01 15:58:11 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 01 15:58:11 localhost systemd[1]: Starting Security Auditing Service...
Oct 01 15:58:11 localhost systemd[1]: Starting RPC Bind...
Oct 01 15:58:11 localhost systemd[1]: Starting Rebuild Journal Catalog...
Oct 01 15:58:11 localhost auditd[699]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 01 15:58:11 localhost auditd[699]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 01 15:58:11 localhost systemd[1]: Started RPC Bind.
Oct 01 15:58:11 localhost systemd[1]: Finished Rebuild Journal Catalog.
Oct 01 15:58:11 localhost augenrules[704]: /sbin/augenrules: No change
Oct 01 15:58:11 localhost augenrules[719]: No rules
Oct 01 15:58:11 localhost augenrules[719]: enabled 1
Oct 01 15:58:11 localhost augenrules[719]: failure 1
Oct 01 15:58:11 localhost augenrules[719]: pid 699
Oct 01 15:58:11 localhost augenrules[719]: rate_limit 0
Oct 01 15:58:11 localhost augenrules[719]: backlog_limit 8192
Oct 01 15:58:11 localhost augenrules[719]: lost 0
Oct 01 15:58:11 localhost augenrules[719]: backlog 0
Oct 01 15:58:11 localhost augenrules[719]: backlog_wait_time 60000
Oct 01 15:58:11 localhost augenrules[719]: backlog_wait_time_actual 0
Oct 01 15:58:11 localhost augenrules[719]: enabled 1
Oct 01 15:58:11 localhost augenrules[719]: failure 1
Oct 01 15:58:11 localhost augenrules[719]: pid 699
Oct 01 15:58:11 localhost augenrules[719]: rate_limit 0
Oct 01 15:58:11 localhost augenrules[719]: backlog_limit 8192
Oct 01 15:58:11 localhost augenrules[719]: lost 0
Oct 01 15:58:11 localhost augenrules[719]: backlog 0
Oct 01 15:58:11 localhost augenrules[719]: backlog_wait_time 60000
Oct 01 15:58:11 localhost augenrules[719]: backlog_wait_time_actual 0
Oct 01 15:58:11 localhost augenrules[719]: enabled 1
Oct 01 15:58:11 localhost augenrules[719]: failure 1
Oct 01 15:58:11 localhost augenrules[719]: pid 699
Oct 01 15:58:11 localhost augenrules[719]: rate_limit 0
Oct 01 15:58:11 localhost augenrules[719]: backlog_limit 8192
Oct 01 15:58:11 localhost augenrules[719]: lost 0
Oct 01 15:58:11 localhost augenrules[719]: backlog 0
Oct 01 15:58:11 localhost augenrules[719]: backlog_wait_time 60000
Oct 01 15:58:11 localhost augenrules[719]: backlog_wait_time_actual 0
Oct 01 15:58:11 localhost systemd[1]: Started Security Auditing Service.
Oct 01 15:58:11 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 01 15:58:11 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 01 15:58:11 localhost systemd[1]: Finished Rebuild Hardware Database.
Oct 01 15:58:11 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 01 15:58:11 localhost systemd-udevd[727]: Using default interface naming scheme 'rhel-9.0'.
Oct 01 15:58:11 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 01 15:58:11 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 01 15:58:11 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 01 15:58:11 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 01 15:58:11 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 01 15:58:11 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 01 15:58:11 localhost systemd[1]: Starting Update is Completed...
Oct 01 15:58:11 localhost systemd[1]: Finished Update is Completed.
Oct 01 15:58:11 localhost systemd[1]: Reached target System Initialization.
Oct 01 15:58:11 localhost systemd[1]: Started dnf makecache --timer.
Oct 01 15:58:11 localhost systemd[1]: Started Daily rotation of log files.
Oct 01 15:58:11 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 01 15:58:11 localhost systemd[1]: Reached target Timer Units.
Oct 01 15:58:11 localhost systemd-udevd[734]: Network interface NamePolicy= disabled on kernel command line.
Oct 01 15:58:11 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 01 15:58:11 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 01 15:58:11 localhost systemd[1]: Reached target Socket Units.
Oct 01 15:58:11 localhost systemd[1]: Starting D-Bus System Message Bus...
Oct 01 15:58:11 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 01 15:58:11 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 01 15:58:11 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 01 15:58:11 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 01 15:58:11 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 01 15:58:11 localhost systemd[1]: Started D-Bus System Message Bus.
Oct 01 15:58:11 localhost systemd[1]: Reached target Basic System.
Oct 01 15:58:11 localhost dbus-broker-lau[768]: Ready
Oct 01 15:58:11 localhost systemd[1]: Starting NTP client/server...
Oct 01 15:58:11 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 01 15:58:11 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 01 15:58:11 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 01 15:58:11 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 01 15:58:11 localhost systemd[1]: Starting IPv4 firewall with iptables...
Oct 01 15:58:11 localhost kernel: Console: switching to colour dummy device 80x25
Oct 01 15:58:11 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 01 15:58:11 localhost kernel: [drm] features: -context_init
Oct 01 15:58:11 localhost kernel: [drm] number of scanouts: 1
Oct 01 15:58:11 localhost kernel: [drm] number of cap sets: 0
Oct 01 15:58:11 localhost systemd[1]: Started irqbalance daemon.
Oct 01 15:58:11 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 01 15:58:11 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 01 15:58:11 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 01 15:58:11 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 01 15:58:11 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 01 15:58:11 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 01 15:58:11 localhost systemd[1]: Reached target User and Group Name Lookups.
Oct 01 15:58:12 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 01 15:58:12 localhost systemd[1]: Starting User Login Management...
Oct 01 15:58:12 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 01 15:58:12 localhost kernel: Console: switching to colour frame buffer device 128x48
Oct 01 15:58:12 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 01 15:58:12 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 01 15:58:12 localhost chronyd[803]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 01 15:58:12 localhost chronyd[803]: Loaded 0 symmetric keys
Oct 01 15:58:12 localhost chronyd[803]: Using right/UTC timezone to obtain leap second data
Oct 01 15:58:12 localhost chronyd[803]: Loaded seccomp filter (level 2)
Oct 01 15:58:12 localhost systemd[1]: Started NTP client/server.
Oct 01 15:58:12 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 01 15:58:12 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 01 15:58:12 localhost systemd-logind[788]: New seat seat0.
Oct 01 15:58:12 localhost systemd-logind[788]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 01 15:58:12 localhost systemd-logind[788]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 01 15:58:12 localhost systemd[1]: Started User Login Management.
Oct 01 15:58:12 localhost kernel: kvm_amd: TSC scaling supported
Oct 01 15:58:12 localhost kernel: kvm_amd: Nested Virtualization enabled
Oct 01 15:58:12 localhost kernel: kvm_amd: Nested Paging enabled
Oct 01 15:58:12 localhost kernel: kvm_amd: LBR virtualization supported
Oct 01 15:58:12 localhost iptables.init[780]: iptables: Applying firewall rules: [  OK  ]
Oct 01 15:58:12 localhost systemd[1]: Finished IPv4 firewall with iptables.
Oct 01 15:58:12 localhost cloud-init[837]: Cloud-init v. 24.4-7.el9 running 'init-local' at Wed, 01 Oct 2025 15:58:12 +0000. Up 7.42 seconds.
Oct 01 15:58:12 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Oct 01 15:58:12 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Oct 01 15:58:12 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp6ano0ch6.mount: Deactivated successfully.
Oct 01 15:58:13 localhost systemd[1]: Starting Hostname Service...
Oct 01 15:58:13 localhost systemd[1]: Started Hostname Service.
Oct 01 15:58:13 np0005464933.novalocal systemd-hostnamed[851]: Hostname set to <np0005464933.novalocal> (static)
Oct 01 15:58:13 np0005464933.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 01 15:58:13 np0005464933.novalocal systemd[1]: Reached target Preparation for Network.
Oct 01 15:58:13 np0005464933.novalocal systemd[1]: Starting Network Manager...
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.3954] NetworkManager (version 1.54.1-1.el9) is starting... (boot:60f5f1a4-b8dd-4af4-b8bb-2f6fb4fb4541)
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.3961] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4147] manager[0x562652d18080]: monitoring kernel firmware directory '/lib/firmware'.
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4218] hostname: hostname: using hostnamed
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4220] hostname: static hostname changed from (none) to "np0005464933.novalocal"
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4225] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4433] manager[0x562652d18080]: rfkill: Wi-Fi hardware radio set enabled
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4434] manager[0x562652d18080]: rfkill: WWAN hardware radio set enabled
Oct 01 15:58:13 np0005464933.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4570] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4570] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4571] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4571] manager: Networking is enabled by state file
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4574] settings: Loaded settings plugin: keyfile (internal)
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4646] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4681] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4726] dhcp: init: Using DHCP client 'internal'
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4730] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4750] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4772] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4788] device (lo): Activation: starting connection 'lo' (203f8aae-0043-4f00-be85-213b46013acc)
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4800] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4803] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 15:58:13 np0005464933.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4863] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4868] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4872] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4876] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4881] device (eth0): carrier: link connected
Oct 01 15:58:13 np0005464933.novalocal systemd[1]: Started Network Manager.
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4885] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 01 15:58:13 np0005464933.novalocal systemd[1]: Reached target Network.
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4911] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4940] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4948] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4950] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4954] manager: NetworkManager state is now CONNECTING
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4957] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 15:58:13 np0005464933.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4968] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.4974] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 01 15:58:13 np0005464933.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.5028] dhcp4 (eth0): state changed new lease, address=38.129.56.223
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.5041] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 01 15:58:13 np0005464933.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.5076] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.5090] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.5094] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.5108] device (lo): Activation: successful, device activated.
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.5123] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.5127] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.5133] manager: NetworkManager state is now CONNECTED_SITE
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.5140] device (eth0): Activation: successful, device activated.
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.5150] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 01 15:58:13 np0005464933.novalocal NetworkManager[855]: <info>  [1759334293.5155] manager: startup complete
Oct 01 15:58:13 np0005464933.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Oct 01 15:58:13 np0005464933.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 01 15:58:13 np0005464933.novalocal systemd[1]: Reached target NFS client services.
Oct 01 15:58:13 np0005464933.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Oct 01 15:58:13 np0005464933.novalocal systemd[1]: Reached target Remote File Systems.
Oct 01 15:58:13 np0005464933.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 01 15:58:13 np0005464933.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 01 15:58:13 np0005464933.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: Cloud-init v. 24.4-7.el9 running 'init' at Wed, 01 Oct 2025 15:58:13 +0000. Up 8.59 seconds.
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: |  eth0  | True |        38.129.56.223         | 255.255.255.0 | global | fa:16:3e:97:75:a6 |
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: |  eth0  | True | fe80::f816:3eff:fe97:75a6/64 |       .       |  link  | fa:16:3e:97:75:a6 |
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct 01 15:58:13 np0005464933.novalocal cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 01 15:58:14 np0005464933.novalocal useradd[985]: new group: name=cloud-user, GID=1001
Oct 01 15:58:14 np0005464933.novalocal useradd[985]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Oct 01 15:58:14 np0005464933.novalocal useradd[985]: add 'cloud-user' to group 'adm'
Oct 01 15:58:14 np0005464933.novalocal useradd[985]: add 'cloud-user' to group 'systemd-journal'
Oct 01 15:58:14 np0005464933.novalocal useradd[985]: add 'cloud-user' to shadow group 'adm'
Oct 01 15:58:14 np0005464933.novalocal useradd[985]: add 'cloud-user' to shadow group 'systemd-journal'
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: Generating public/private rsa key pair.
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: The key fingerprint is:
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: SHA256:5GtJGU4P3haMsV8CKtaP2szoa/SDLEbLuLnFGH3+3+g root@np0005464933.novalocal
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: The key's randomart image is:
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: +---[RSA 3072]----+
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |        o        |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |     . . *       |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |    o o B + .    |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: | . . . O B +     |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |. . . . S =      |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: | +.o.* . +       |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |.+o++o+ +        |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |.o=.+.o. o       |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |++ oo..+E .      |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: +----[SHA256]-----+
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: Generating public/private ecdsa key pair.
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: The key fingerprint is:
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: SHA256:MOLykisI2aAIpkpzKcTUujd13fHxekoGJQfakwqr7E4 root@np0005464933.novalocal
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: The key's randomart image is:
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: +---[ECDSA 256]---+
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |  .        ..    |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: | . .      oo.+   |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |o . . o....+* o  |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |o= . o +o..o.. . |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |B+o + ..S.  . .  |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |*=.O. .      + . |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |= B oE      o o  |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |+  oo        .   |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: | .. .o           |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: +----[SHA256]-----+
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: Generating public/private ed25519 key pair.
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: The key fingerprint is:
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: SHA256:2FRwDCzAasK4XZMPfU6RIbgFZ2oD+NQxT7Hr9i1jnM0 root@np0005464933.novalocal
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: The key's randomart image is:
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: +--[ED25519 256]--+
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: | ..o==*o+*o      |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |. ..+Boo+o.      |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |oo .+=o...       |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |o.+.*..=o        |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: | = . +o+S        |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |. .  .. .        |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |      o. +       |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |     . .*.E      |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: |       ..o.      |
Oct 01 15:58:15 np0005464933.novalocal cloud-init[919]: +----[SHA256]-----+
Oct 01 15:58:15 np0005464933.novalocal sm-notify[1000]: Version 2.5.4 starting
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: Reached target Cloud-config availability.
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: Reached target Network is Online.
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: Starting System Logging Service...
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: Starting OpenSSH server daemon...
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: Starting Permit User Sessions...
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: Started Notify NFS peers of a restart.
Oct 01 15:58:15 np0005464933.novalocal sshd[1002]: Server listening on 0.0.0.0 port 22.
Oct 01 15:58:15 np0005464933.novalocal sshd[1002]: Server listening on :: port 22.
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: Started OpenSSH server daemon.
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: Finished Permit User Sessions.
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: Started Command Scheduler.
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: Started Getty on tty1.
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: Started Serial Getty on ttyS0.
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: Reached target Login Prompts.
Oct 01 15:58:15 np0005464933.novalocal crond[1004]: (CRON) STARTUP (1.5.7)
Oct 01 15:58:15 np0005464933.novalocal crond[1004]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 01 15:58:15 np0005464933.novalocal crond[1004]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 94% if used.)
Oct 01 15:58:15 np0005464933.novalocal crond[1004]: (CRON) INFO (running with inotify support)
Oct 01 15:58:15 np0005464933.novalocal rsyslogd[1001]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1001" x-info="https://www.rsyslog.com"] start
Oct 01 15:58:15 np0005464933.novalocal rsyslogd[1001]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: Started System Logging Service.
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: Reached target Multi-User System.
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 01 15:58:15 np0005464933.novalocal rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 15:58:15 np0005464933.novalocal sshd-session[1014]: Unable to negotiate with 38.102.83.114 port 53778: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Oct 01 15:58:15 np0005464933.novalocal sshd-session[1017]: Connection closed by 38.102.83.114 port 53788 [preauth]
Oct 01 15:58:15 np0005464933.novalocal sshd-session[1020]: Unable to negotiate with 38.102.83.114 port 53794: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Oct 01 15:58:15 np0005464933.novalocal sshd-session[1022]: Unable to negotiate with 38.102.83.114 port 53808: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Oct 01 15:58:15 np0005464933.novalocal sshd-session[1012]: Connection closed by 38.102.83.114 port 53764 [preauth]
Oct 01 15:58:15 np0005464933.novalocal cloud-init[1025]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Wed, 01 Oct 2025 15:58:15 +0000. Up 10.63 seconds.
Oct 01 15:58:15 np0005464933.novalocal sshd-session[1024]: Connection reset by 38.102.83.114 port 53822 [preauth]
Oct 01 15:58:15 np0005464933.novalocal sshd-session[1029]: Unable to negotiate with 38.102.83.114 port 53830: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Oct 01 15:58:15 np0005464933.novalocal sshd-session[1031]: Unable to negotiate with 38.102.83.114 port 53838: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Oct 01 15:58:15 np0005464933.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Oct 01 15:58:16 np0005464933.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Oct 01 15:58:16 np0005464933.novalocal sshd-session[1027]: Connection closed by 38.102.83.114 port 53824 [preauth]
Oct 01 15:58:16 np0005464933.novalocal cloud-init[1036]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Wed, 01 Oct 2025 15:58:16 +0000. Up 11.06 seconds.
Oct 01 15:58:16 np0005464933.novalocal cloud-init[1038]: #############################################################
Oct 01 15:58:16 np0005464933.novalocal cloud-init[1039]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 01 15:58:16 np0005464933.novalocal cloud-init[1041]: 256 SHA256:MOLykisI2aAIpkpzKcTUujd13fHxekoGJQfakwqr7E4 root@np0005464933.novalocal (ECDSA)
Oct 01 15:58:16 np0005464933.novalocal cloud-init[1043]: 256 SHA256:2FRwDCzAasK4XZMPfU6RIbgFZ2oD+NQxT7Hr9i1jnM0 root@np0005464933.novalocal (ED25519)
Oct 01 15:58:16 np0005464933.novalocal cloud-init[1045]: 3072 SHA256:5GtJGU4P3haMsV8CKtaP2szoa/SDLEbLuLnFGH3+3+g root@np0005464933.novalocal (RSA)
Oct 01 15:58:16 np0005464933.novalocal cloud-init[1046]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 01 15:58:16 np0005464933.novalocal cloud-init[1047]: #############################################################
Oct 01 15:58:16 np0005464933.novalocal cloud-init[1036]: Cloud-init v. 24.4-7.el9 finished at Wed, 01 Oct 2025 15:58:16 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.28 seconds
Oct 01 15:58:16 np0005464933.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Oct 01 15:58:16 np0005464933.novalocal systemd[1]: Reached target Cloud-init target.
Oct 01 15:58:16 np0005464933.novalocal systemd[1]: Startup finished in 1.831s (kernel) + 2.912s (initrd) + 6.599s (userspace) = 11.344s.
Oct 01 15:58:17 np0005464933.novalocal chronyd[803]: Selected source 172.97.210.214 (2.centos.pool.ntp.org)
Oct 01 15:58:17 np0005464933.novalocal chronyd[803]: System clock TAI offset set to 37 seconds
Oct 01 15:58:22 np0005464933.novalocal irqbalance[786]: Cannot change IRQ 25 affinity: Operation not permitted
Oct 01 15:58:22 np0005464933.novalocal irqbalance[786]: IRQ 25 affinity is now unmanaged
Oct 01 15:58:22 np0005464933.novalocal irqbalance[786]: Cannot change IRQ 31 affinity: Operation not permitted
Oct 01 15:58:22 np0005464933.novalocal irqbalance[786]: IRQ 31 affinity is now unmanaged
Oct 01 15:58:22 np0005464933.novalocal irqbalance[786]: Cannot change IRQ 28 affinity: Operation not permitted
Oct 01 15:58:22 np0005464933.novalocal irqbalance[786]: IRQ 28 affinity is now unmanaged
Oct 01 15:58:22 np0005464933.novalocal irqbalance[786]: Cannot change IRQ 32 affinity: Operation not permitted
Oct 01 15:58:22 np0005464933.novalocal irqbalance[786]: IRQ 32 affinity is now unmanaged
Oct 01 15:58:22 np0005464933.novalocal irqbalance[786]: Cannot change IRQ 30 affinity: Operation not permitted
Oct 01 15:58:22 np0005464933.novalocal irqbalance[786]: IRQ 30 affinity is now unmanaged
Oct 01 15:58:22 np0005464933.novalocal irqbalance[786]: Cannot change IRQ 29 affinity: Operation not permitted
Oct 01 15:58:22 np0005464933.novalocal irqbalance[786]: IRQ 29 affinity is now unmanaged
Oct 01 15:58:23 np0005464933.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 01 15:58:29 np0005464933.novalocal sshd-session[1053]: Accepted publickey for zuul from 38.102.83.114 port 38804 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Oct 01 15:58:29 np0005464933.novalocal systemd[1]: Created slice User Slice of UID 1000.
Oct 01 15:58:29 np0005464933.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 01 15:58:29 np0005464933.novalocal systemd-logind[788]: New session 1 of user zuul.
Oct 01 15:58:29 np0005464933.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 01 15:58:29 np0005464933.novalocal systemd[1]: Starting User Manager for UID 1000...
Oct 01 15:58:29 np0005464933.novalocal systemd[1057]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 15:58:29 np0005464933.novalocal systemd[1057]: Queued start job for default target Main User Target.
Oct 01 15:58:29 np0005464933.novalocal systemd[1057]: Created slice User Application Slice.
Oct 01 15:58:29 np0005464933.novalocal systemd[1057]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 01 15:58:29 np0005464933.novalocal systemd[1057]: Started Daily Cleanup of User's Temporary Directories.
Oct 01 15:58:29 np0005464933.novalocal systemd[1057]: Reached target Paths.
Oct 01 15:58:29 np0005464933.novalocal systemd[1057]: Reached target Timers.
Oct 01 15:58:29 np0005464933.novalocal systemd[1057]: Starting D-Bus User Message Bus Socket...
Oct 01 15:58:29 np0005464933.novalocal systemd[1057]: Starting Create User's Volatile Files and Directories...
Oct 01 15:58:29 np0005464933.novalocal systemd[1057]: Finished Create User's Volatile Files and Directories.
Oct 01 15:58:29 np0005464933.novalocal systemd[1057]: Listening on D-Bus User Message Bus Socket.
Oct 01 15:58:29 np0005464933.novalocal systemd[1057]: Reached target Sockets.
Oct 01 15:58:29 np0005464933.novalocal systemd[1057]: Reached target Basic System.
Oct 01 15:58:29 np0005464933.novalocal systemd[1057]: Reached target Main User Target.
Oct 01 15:58:29 np0005464933.novalocal systemd[1057]: Startup finished in 119ms.
Oct 01 15:58:29 np0005464933.novalocal systemd[1]: Started User Manager for UID 1000.
Oct 01 15:58:29 np0005464933.novalocal systemd[1]: Started Session 1 of User zuul.
Oct 01 15:58:29 np0005464933.novalocal sshd-session[1053]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 15:58:30 np0005464933.novalocal python3[1139]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 15:58:32 np0005464933.novalocal python3[1167]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 15:58:38 np0005464933.novalocal python3[1225]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 15:58:39 np0005464933.novalocal python3[1265]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 01 15:58:41 np0005464933.novalocal python3[1291]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCslorEMFYThyOXaaJYUy9jZAzhKDXFeUq4wfTugBYSiVYToShUmvmBQzocLWPku2dKybFwtznGbm6+pgEyVT31YFb+hf/CYV3OGFWalUiPkTDqpnJqTW3QkkgfRc8tICg0rLNzvLL9LrZ6oL7ppEt8fvMat0XtxsprSszLvZt5mA6QUnWfpKCzYsT8Qn6HKdoj1ZItEN+R9BRCe7eG8KMwlyUPy/oR5L4mYM3/WsH+/u0HZPI16yuDokrMG1m65bVVofcz7dgMrJg3brZifwjYdlFZWLdMsEIaO2uLQyyG831nFiSQEsnlxB2AiMx/ZWq1KLHyAJ/9XOgghQOPNnhBGQ4GH/kNdskiYfRgCtHA2JGfbiqn3z0+mkJ5ntfrBm4j/sEV/1LkKV7Pt76NNrtbzjaa2YiCmZ/N4xGpl+8uxsD0fDa8R9TsCnP42fCB+Rsr2dQ4e9jXKOzuXRw1JWYqYf+6rrSiGwmLfiV0mmswrtmyDCoKWr9mLXYzGO0kkeE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:41 np0005464933.novalocal python3[1315]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 15:58:42 np0005464933.novalocal python3[1414]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 15:58:42 np0005464933.novalocal python3[1485]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759334321.9893625-207-83674430212977/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=4cfb50a0157c421ca28dcb954024733e_id_rsa follow=False checksum=5573d0e3208b81c28d0b119eda1a83d33a70d801 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 15:58:43 np0005464933.novalocal python3[1608]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 15:58:43 np0005464933.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 01 15:58:43 np0005464933.novalocal python3[1681]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759334322.940313-240-130970383099940/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=4cfb50a0157c421ca28dcb954024733e_id_rsa.pub follow=False checksum=e1015bb9a4f9e0878dc9686973f5c5514882ea5a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 15:58:45 np0005464933.novalocal python3[1729]: ansible-ping Invoked with data=pong
Oct 01 15:58:45 np0005464933.novalocal python3[1753]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 15:58:47 np0005464933.novalocal python3[1811]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 01 15:58:48 np0005464933.novalocal python3[1843]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 15:58:48 np0005464933.novalocal python3[1867]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 15:58:49 np0005464933.novalocal python3[1891]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 15:58:49 np0005464933.novalocal python3[1915]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 15:58:49 np0005464933.novalocal python3[1939]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 15:58:50 np0005464933.novalocal python3[1963]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 15:58:51 np0005464933.novalocal sudo[1988]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjfutkbcxaxdtcyiizreblpzzabssllg ; /usr/bin/python3'
Oct 01 15:58:51 np0005464933.novalocal sudo[1988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 15:58:51 np0005464933.novalocal python3[1991]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 15:58:51 np0005464933.novalocal sudo[1988]: pam_unix(sudo:session): session closed for user root
Oct 01 15:58:51 np0005464933.novalocal sshd-session[1985]: Invalid user  from 64.62.156.97 port 17497
Oct 01 15:58:51 np0005464933.novalocal sudo[2067]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pekqjqengaysywifuobbppkzkxptddaw ; /usr/bin/python3'
Oct 01 15:58:51 np0005464933.novalocal sudo[2067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 15:58:52 np0005464933.novalocal python3[2069]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 15:58:52 np0005464933.novalocal sudo[2067]: pam_unix(sudo:session): session closed for user root
Oct 01 15:58:52 np0005464933.novalocal sudo[2140]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enaxjbyjyjqkyfzixflnhtsbmsjngvxm ; /usr/bin/python3'
Oct 01 15:58:52 np0005464933.novalocal sudo[2140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 15:58:52 np0005464933.novalocal python3[2142]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759334331.7181926-21-144724284569780/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 15:58:52 np0005464933.novalocal sudo[2140]: pam_unix(sudo:session): session closed for user root
Oct 01 15:58:53 np0005464933.novalocal python3[2190]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:53 np0005464933.novalocal python3[2214]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:53 np0005464933.novalocal python3[2238]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:54 np0005464933.novalocal python3[2262]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:54 np0005464933.novalocal python3[2286]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:54 np0005464933.novalocal python3[2310]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:55 np0005464933.novalocal python3[2334]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:55 np0005464933.novalocal python3[2358]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:55 np0005464933.novalocal sshd-session[1985]: Connection closed by invalid user  64.62.156.97 port 17497 [preauth]
Oct 01 15:58:55 np0005464933.novalocal python3[2382]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:55 np0005464933.novalocal python3[2406]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:56 np0005464933.novalocal python3[2430]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:56 np0005464933.novalocal python3[2454]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:56 np0005464933.novalocal python3[2478]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:57 np0005464933.novalocal python3[2502]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:57 np0005464933.novalocal python3[2526]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:57 np0005464933.novalocal python3[2550]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:58 np0005464933.novalocal python3[2574]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:58 np0005464933.novalocal python3[2598]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:58 np0005464933.novalocal python3[2622]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:58 np0005464933.novalocal python3[2646]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:59 np0005464933.novalocal python3[2670]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:59 np0005464933.novalocal python3[2694]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:58:59 np0005464933.novalocal python3[2718]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:59:00 np0005464933.novalocal python3[2742]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:59:00 np0005464933.novalocal python3[2766]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:59:00 np0005464933.novalocal python3[2790]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 15:59:02 np0005464933.novalocal sudo[2814]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcrmbjyxttbyjbrecqoavmxzvdsbwjxf ; /usr/bin/python3'
Oct 01 15:59:02 np0005464933.novalocal sudo[2814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 15:59:03 np0005464933.novalocal python3[2816]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 01 15:59:03 np0005464933.novalocal systemd[1]: Starting Time & Date Service...
Oct 01 15:59:03 np0005464933.novalocal systemd[1]: Started Time & Date Service.
Oct 01 15:59:03 np0005464933.novalocal systemd-timedated[2818]: Changed time zone to 'UTC' (UTC).
Oct 01 15:59:03 np0005464933.novalocal sudo[2814]: pam_unix(sudo:session): session closed for user root
Oct 01 15:59:03 np0005464933.novalocal sudo[2845]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwplamlpvqwokyrcfhvxlfculspxswjn ; /usr/bin/python3'
Oct 01 15:59:03 np0005464933.novalocal sudo[2845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 15:59:03 np0005464933.novalocal python3[2847]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 15:59:03 np0005464933.novalocal sudo[2845]: pam_unix(sudo:session): session closed for user root
Oct 01 15:59:04 np0005464933.novalocal python3[2923]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 15:59:04 np0005464933.novalocal python3[2994]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1759334343.9129078-153-71806838960980/source _original_basename=tmpw0fmf0my follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 15:59:05 np0005464933.novalocal python3[3094]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 15:59:05 np0005464933.novalocal python3[3165]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759334344.7720296-183-259007966431711/source _original_basename=tmpc44oaw2y follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 15:59:05 np0005464933.novalocal sudo[3265]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmlvmnwoosplpccelthrefmfeocjtojk ; /usr/bin/python3'
Oct 01 15:59:05 np0005464933.novalocal sudo[3265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 15:59:06 np0005464933.novalocal python3[3267]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 15:59:06 np0005464933.novalocal sudo[3265]: pam_unix(sudo:session): session closed for user root
Oct 01 15:59:06 np0005464933.novalocal sudo[3338]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-modyhfcaqczqefsqqcylvneccroxekkb ; /usr/bin/python3'
Oct 01 15:59:06 np0005464933.novalocal sudo[3338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 15:59:06 np0005464933.novalocal python3[3340]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759334345.8151479-231-275245133563177/source _original_basename=tmpj8ngpo4a follow=False checksum=274dc6a85f2bb9a5576252e19941cf9f4791beb8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 15:59:06 np0005464933.novalocal sudo[3338]: pam_unix(sudo:session): session closed for user root
Oct 01 15:59:07 np0005464933.novalocal python3[3388]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 15:59:07 np0005464933.novalocal python3[3414]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 15:59:07 np0005464933.novalocal sudo[3492]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybqpqlwwiljfcuqaakxfinwsqxdawayu ; /usr/bin/python3'
Oct 01 15:59:07 np0005464933.novalocal sudo[3492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 15:59:07 np0005464933.novalocal python3[3494]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 15:59:07 np0005464933.novalocal sudo[3492]: pam_unix(sudo:session): session closed for user root
Oct 01 15:59:08 np0005464933.novalocal sudo[3565]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vutprjaqkipznplnrivdmysjgvkcrhpn ; /usr/bin/python3'
Oct 01 15:59:08 np0005464933.novalocal sudo[3565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 15:59:08 np0005464933.novalocal python3[3567]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1759334347.463964-273-19811864407188/source _original_basename=tmpyiltx61i follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 15:59:08 np0005464933.novalocal sudo[3565]: pam_unix(sudo:session): session closed for user root
Oct 01 15:59:08 np0005464933.novalocal sudo[3616]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxyexuogzuxayqbzpzpxzewsaavlzshs ; /usr/bin/python3'
Oct 01 15:59:08 np0005464933.novalocal sudo[3616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 15:59:08 np0005464933.novalocal python3[3618]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-a1e0-a090-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 15:59:08 np0005464933.novalocal sudo[3616]: pam_unix(sudo:session): session closed for user root
Oct 01 15:59:09 np0005464933.novalocal python3[3646]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-a1e0-a090-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct 01 15:59:10 np0005464933.novalocal python3[3675]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 15:59:12 np0005464933.novalocal irqbalance[786]: Cannot change IRQ 26 affinity: Operation not permitted
Oct 01 15:59:12 np0005464933.novalocal irqbalance[786]: IRQ 26 affinity is now unmanaged
Oct 01 15:59:28 np0005464933.novalocal sudo[3699]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nocuqpatzzukjpqevxczyfdsrzesfkim ; /usr/bin/python3'
Oct 01 15:59:28 np0005464933.novalocal sudo[3699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 15:59:28 np0005464933.novalocal python3[3701]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 15:59:28 np0005464933.novalocal sudo[3699]: pam_unix(sudo:session): session closed for user root
Oct 01 15:59:33 np0005464933.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 01 16:00:06 np0005464933.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 01 16:00:06 np0005464933.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct 01 16:00:06 np0005464933.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct 01 16:00:06 np0005464933.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct 01 16:00:06 np0005464933.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct 01 16:00:06 np0005464933.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct 01 16:00:06 np0005464933.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct 01 16:00:06 np0005464933.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct 01 16:00:06 np0005464933.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct 01 16:00:06 np0005464933.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct 01 16:00:06 np0005464933.novalocal NetworkManager[855]: <info>  [1759334406.1216] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 01 16:00:06 np0005464933.novalocal systemd-udevd[3705]: Network interface NamePolicy= disabled on kernel command line.
Oct 01 16:00:06 np0005464933.novalocal NetworkManager[855]: <info>  [1759334406.1448] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 16:00:06 np0005464933.novalocal NetworkManager[855]: <info>  [1759334406.1478] settings: (eth1): created default wired connection 'Wired connection 1'
Oct 01 16:00:06 np0005464933.novalocal NetworkManager[855]: <info>  [1759334406.1482] device (eth1): carrier: link connected
Oct 01 16:00:06 np0005464933.novalocal NetworkManager[855]: <info>  [1759334406.1484] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 01 16:00:06 np0005464933.novalocal NetworkManager[855]: <info>  [1759334406.1491] policy: auto-activating connection 'Wired connection 1' (01b669a8-fa91-383c-b7dc-5c5d3c1764d8)
Oct 01 16:00:06 np0005464933.novalocal NetworkManager[855]: <info>  [1759334406.1495] device (eth1): Activation: starting connection 'Wired connection 1' (01b669a8-fa91-383c-b7dc-5c5d3c1764d8)
Oct 01 16:00:06 np0005464933.novalocal NetworkManager[855]: <info>  [1759334406.1496] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 16:00:06 np0005464933.novalocal NetworkManager[855]: <info>  [1759334406.1499] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 16:00:06 np0005464933.novalocal NetworkManager[855]: <info>  [1759334406.1503] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 16:00:06 np0005464933.novalocal NetworkManager[855]: <info>  [1759334406.1508] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 01 16:00:07 np0005464933.novalocal python3[3731]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-1312-c4a4-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:00:17 np0005464933.novalocal sudo[3810]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvuomcnubbxczbdkyguvltvasfvwalyo ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 01 16:00:17 np0005464933.novalocal sudo[3810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:00:17 np0005464933.novalocal python3[3812]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 16:00:17 np0005464933.novalocal sudo[3810]: pam_unix(sudo:session): session closed for user root
Oct 01 16:00:17 np0005464933.novalocal sudo[3883]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpdbxykffviebdmvmoockcssjclnidst ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 01 16:00:17 np0005464933.novalocal sudo[3883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:00:17 np0005464933.novalocal python3[3885]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759334416.9210424-102-110403244547621/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=43cddb69544fbcd2d54a6f644bbbeeabbe0c2732 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:00:17 np0005464933.novalocal sudo[3883]: pam_unix(sudo:session): session closed for user root
Oct 01 16:00:18 np0005464933.novalocal sudo[3933]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rprduzqlkwmxlbdedrghxmbkknjzvyji ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 01 16:00:18 np0005464933.novalocal sudo[3933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:00:18 np0005464933.novalocal python3[3935]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 16:00:18 np0005464933.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 01 16:00:18 np0005464933.novalocal systemd[1]: Stopped Network Manager Wait Online.
Oct 01 16:00:18 np0005464933.novalocal systemd[1]: Stopping Network Manager Wait Online...
Oct 01 16:00:18 np0005464933.novalocal systemd[1]: Stopping Network Manager...
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[855]: <info>  [1759334418.5972] caught SIGTERM, shutting down normally.
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[855]: <info>  [1759334418.5983] dhcp4 (eth0): canceled DHCP transaction
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[855]: <info>  [1759334418.5983] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[855]: <info>  [1759334418.5983] dhcp4 (eth0): state changed no lease
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[855]: <info>  [1759334418.5986] manager: NetworkManager state is now CONNECTING
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[855]: <info>  [1759334418.6160] dhcp4 (eth1): canceled DHCP transaction
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[855]: <info>  [1759334418.6161] dhcp4 (eth1): state changed no lease
Oct 01 16:00:18 np0005464933.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 01 16:00:18 np0005464933.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[855]: <info>  [1759334418.7207] exiting (success)
Oct 01 16:00:18 np0005464933.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 01 16:00:18 np0005464933.novalocal systemd[1]: Stopped Network Manager.
Oct 01 16:00:18 np0005464933.novalocal systemd[1]: Starting Network Manager...
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.7966] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:60f5f1a4-b8dd-4af4-b8bb-2f6fb4fb4541)
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.7967] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.8038] manager[0x565300521070]: monitoring kernel firmware directory '/lib/firmware'.
Oct 01 16:00:18 np0005464933.novalocal systemd[1]: Starting Hostname Service...
Oct 01 16:00:18 np0005464933.novalocal systemd[1]: Started Hostname Service.
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9243] hostname: hostname: using hostnamed
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9247] hostname: static hostname changed from (none) to "np0005464933.novalocal"
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9253] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9258] manager[0x565300521070]: rfkill: Wi-Fi hardware radio set enabled
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9259] manager[0x565300521070]: rfkill: WWAN hardware radio set enabled
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9293] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9294] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9294] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9295] manager: Networking is enabled by state file
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9298] settings: Loaded settings plugin: keyfile (internal)
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9304] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9336] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9348] dhcp: init: Using DHCP client 'internal'
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9352] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9359] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9370] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9383] device (lo): Activation: starting connection 'lo' (203f8aae-0043-4f00-be85-213b46013acc)
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9394] device (eth0): carrier: link connected
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9401] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9407] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9408] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9416] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9425] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9431] device (eth1): carrier: link connected
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9436] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9441] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (01b669a8-fa91-383c-b7dc-5c5d3c1764d8) (indicated)
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9442] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9447] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9455] device (eth1): Activation: starting connection 'Wired connection 1' (01b669a8-fa91-383c-b7dc-5c5d3c1764d8)
Oct 01 16:00:18 np0005464933.novalocal systemd[1]: Started Network Manager.
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9462] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9479] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9483] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9486] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9489] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9492] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9495] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9498] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9502] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9510] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9514] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9524] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9526] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9555] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9556] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9564] device (lo): Activation: successful, device activated.
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9573] dhcp4 (eth0): state changed new lease, address=38.129.56.223
Oct 01 16:00:18 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334418.9580] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 01 16:00:18 np0005464933.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 01 16:00:18 np0005464933.novalocal sudo[3933]: pam_unix(sudo:session): session closed for user root
Oct 01 16:00:19 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334419.0487] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 01 16:00:19 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334419.0511] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 01 16:00:19 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334419.0513] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 01 16:00:19 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334419.0517] manager: NetworkManager state is now CONNECTED_SITE
Oct 01 16:00:19 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334419.0523] device (eth0): Activation: successful, device activated.
Oct 01 16:00:19 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334419.0529] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 01 16:00:19 np0005464933.novalocal python3[4020]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-1312-c4a4-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:00:29 np0005464933.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 01 16:00:48 np0005464933.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 01 16:01:01 np0005464933.novalocal CROND[4026]: (root) CMD (run-parts /etc/cron.hourly)
Oct 01 16:01:01 np0005464933.novalocal run-parts[4029]: (/etc/cron.hourly) starting 0anacron
Oct 01 16:01:01 np0005464933.novalocal anacron[4037]: Anacron started on 2025-10-01
Oct 01 16:01:01 np0005464933.novalocal anacron[4037]: Will run job `cron.daily' in 28 min.
Oct 01 16:01:01 np0005464933.novalocal anacron[4037]: Will run job `cron.weekly' in 48 min.
Oct 01 16:01:01 np0005464933.novalocal anacron[4037]: Will run job `cron.monthly' in 68 min.
Oct 01 16:01:01 np0005464933.novalocal anacron[4037]: Jobs will be executed sequentially
Oct 01 16:01:01 np0005464933.novalocal run-parts[4039]: (/etc/cron.hourly) finished 0anacron
Oct 01 16:01:01 np0005464933.novalocal CROND[4025]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 01 16:01:02 np0005464933.novalocal systemd[1057]: Starting Mark boot as successful...
Oct 01 16:01:02 np0005464933.novalocal systemd[1057]: Finished Mark boot as successful.
Oct 01 16:01:04 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334464.2335] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 01 16:01:04 np0005464933.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 01 16:01:04 np0005464933.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 01 16:01:04 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334464.2699] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 01 16:01:04 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334464.2704] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 01 16:01:04 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334464.2713] device (eth1): Activation: successful, device activated.
Oct 01 16:01:04 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334464.2721] manager: startup complete
Oct 01 16:01:04 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334464.2724] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct 01 16:01:04 np0005464933.novalocal NetworkManager[3952]: <warn>  [1759334464.2732] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct 01 16:01:04 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334464.2742] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct 01 16:01:04 np0005464933.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 01 16:01:04 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334464.2830] dhcp4 (eth1): canceled DHCP transaction
Oct 01 16:01:04 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334464.2831] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 01 16:01:04 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334464.2831] dhcp4 (eth1): state changed no lease
Oct 01 16:01:04 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334464.2854] policy: auto-activating connection 'ci-private-network' (d4aeb451-37af-5c94-b881-be0e7e424ee8)
Oct 01 16:01:04 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334464.2862] device (eth1): Activation: starting connection 'ci-private-network' (d4aeb451-37af-5c94-b881-be0e7e424ee8)
Oct 01 16:01:04 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334464.2864] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 16:01:04 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334464.2869] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 16:01:04 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334464.2882] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 16:01:04 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334464.2897] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 16:01:04 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334464.3317] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 16:01:04 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334464.3322] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 16:01:04 np0005464933.novalocal NetworkManager[3952]: <info>  [1759334464.3332] device (eth1): Activation: successful, device activated.
Oct 01 16:01:14 np0005464933.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 01 16:01:17 np0005464933.novalocal sudo[4139]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqkxsnyjdtnnziintlxpiwdwpsweqzth ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 01 16:01:17 np0005464933.novalocal sudo[4139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:01:17 np0005464933.novalocal python3[4141]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 16:01:17 np0005464933.novalocal sudo[4139]: pam_unix(sudo:session): session closed for user root
Oct 01 16:01:17 np0005464933.novalocal sudo[4212]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxywvjrturjuhwjrnnyfwgaycqbgnpos ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 01 16:01:17 np0005464933.novalocal sudo[4212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:01:18 np0005464933.novalocal python3[4214]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759334477.2279484-267-185482756758255/source _original_basename=tmp8l65b4no follow=False checksum=453e5e8395c58bf7742ec40b95df2ccd50821fea backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:01:18 np0005464933.novalocal sudo[4212]: pam_unix(sudo:session): session closed for user root
Oct 01 16:02:18 np0005464933.novalocal sshd-session[1066]: Received disconnect from 38.102.83.114 port 38804:11: disconnected by user
Oct 01 16:02:18 np0005464933.novalocal sshd-session[1066]: Disconnected from user zuul 38.102.83.114 port 38804
Oct 01 16:02:18 np0005464933.novalocal sshd-session[1053]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:02:18 np0005464933.novalocal systemd-logind[788]: Session 1 logged out. Waiting for processes to exit.
Oct 01 16:03:59 np0005464933.novalocal sshd-session[4240]: Invalid user telecomadmin from 185.156.73.233 port 26888
Oct 01 16:03:59 np0005464933.novalocal sshd-session[4240]: Connection closed by invalid user telecomadmin 185.156.73.233 port 26888 [preauth]
Oct 01 16:04:02 np0005464933.novalocal systemd[1057]: Created slice User Background Tasks Slice.
Oct 01 16:04:02 np0005464933.novalocal systemd[1057]: Starting Cleanup of User's Temporary Files and Directories...
Oct 01 16:04:02 np0005464933.novalocal systemd[1057]: Finished Cleanup of User's Temporary Files and Directories.
Oct 01 16:06:45 np0005464933.novalocal sshd-session[4246]: Accepted publickey for zuul from 38.102.83.114 port 42478 ssh2: RSA SHA256:5lTJU/gEmQ/yi1WTLiMVGJft7+lcRZSTGB6P0Q6MG20
Oct 01 16:06:45 np0005464933.novalocal systemd-logind[788]: New session 3 of user zuul.
Oct 01 16:06:45 np0005464933.novalocal systemd[1]: Started Session 3 of User zuul.
Oct 01 16:06:45 np0005464933.novalocal sshd-session[4246]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:06:45 np0005464933.novalocal sudo[4273]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqehnarfpzeaqgbcwscbfdtxnieqbvuq ; /usr/bin/python3'
Oct 01 16:06:45 np0005464933.novalocal sudo[4273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:06:45 np0005464933.novalocal python3[4275]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-f1ed-25bf-000000001ce8-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:06:45 np0005464933.novalocal sudo[4273]: pam_unix(sudo:session): session closed for user root
Oct 01 16:06:45 np0005464933.novalocal sudo[4301]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqyzsdmkvmgqhtlnrqqavaelzoxxppmm ; /usr/bin/python3'
Oct 01 16:06:45 np0005464933.novalocal sudo[4301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:06:46 np0005464933.novalocal python3[4303]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:06:46 np0005464933.novalocal sudo[4301]: pam_unix(sudo:session): session closed for user root
Oct 01 16:06:46 np0005464933.novalocal sudo[4327]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjoikzwxkahfdmdsscqkcxiwoahfhqvv ; /usr/bin/python3'
Oct 01 16:06:46 np0005464933.novalocal sudo[4327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:06:46 np0005464933.novalocal python3[4329]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:06:46 np0005464933.novalocal sudo[4327]: pam_unix(sudo:session): session closed for user root
Oct 01 16:06:46 np0005464933.novalocal sudo[4354]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmnrabqppxvtfhbpkkbepenougwkzumd ; /usr/bin/python3'
Oct 01 16:06:46 np0005464933.novalocal sudo[4354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:06:46 np0005464933.novalocal python3[4356]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:06:46 np0005464933.novalocal sudo[4354]: pam_unix(sudo:session): session closed for user root
Oct 01 16:06:46 np0005464933.novalocal sudo[4380]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmtinurvxcyxnqmnvvcvmbokfslxoydi ; /usr/bin/python3'
Oct 01 16:06:46 np0005464933.novalocal sudo[4380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:06:46 np0005464933.novalocal python3[4382]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:06:46 np0005464933.novalocal sudo[4380]: pam_unix(sudo:session): session closed for user root
Oct 01 16:06:47 np0005464933.novalocal sudo[4406]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cafzhfjibduhuujyotfiyweffgdhbmjw ; /usr/bin/python3'
Oct 01 16:06:47 np0005464933.novalocal sudo[4406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:06:47 np0005464933.novalocal python3[4408]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:06:47 np0005464933.novalocal python3[4408]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct 01 16:06:47 np0005464933.novalocal sudo[4406]: pam_unix(sudo:session): session closed for user root
Oct 01 16:06:47 np0005464933.novalocal sudo[4432]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kovxbzfzrvinzcsjfjghcccngkaegdnl ; /usr/bin/python3'
Oct 01 16:06:47 np0005464933.novalocal sudo[4432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:06:48 np0005464933.novalocal python3[4434]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 01 16:06:48 np0005464933.novalocal systemd[1]: Reloading.
Oct 01 16:06:48 np0005464933.novalocal systemd-rc-local-generator[4457]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:06:48 np0005464933.novalocal sudo[4432]: pam_unix(sudo:session): session closed for user root
Oct 01 16:06:49 np0005464933.novalocal sudo[4488]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zujxsuuodhtylsciheqomuxgapmrmvml ; /usr/bin/python3'
Oct 01 16:06:49 np0005464933.novalocal sudo[4488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:06:49 np0005464933.novalocal python3[4490]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 01 16:06:49 np0005464933.novalocal sudo[4488]: pam_unix(sudo:session): session closed for user root
Oct 01 16:06:49 np0005464933.novalocal sudo[4514]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jezmnmewhyhprbwmvbqzymyudoaqpwzd ; /usr/bin/python3'
Oct 01 16:06:49 np0005464933.novalocal sudo[4514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:06:50 np0005464933.novalocal python3[4516]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:06:50 np0005464933.novalocal sudo[4514]: pam_unix(sudo:session): session closed for user root
Oct 01 16:06:50 np0005464933.novalocal sudo[4542]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvfpjklavmcemtjbbwduxsuigkgygral ; /usr/bin/python3'
Oct 01 16:06:50 np0005464933.novalocal sudo[4542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:06:50 np0005464933.novalocal python3[4544]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:06:50 np0005464933.novalocal sudo[4542]: pam_unix(sudo:session): session closed for user root
Oct 01 16:06:50 np0005464933.novalocal sudo[4570]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mclzyydgsgjuhrpryqwvaxlcseoxkuau ; /usr/bin/python3'
Oct 01 16:06:50 np0005464933.novalocal sudo[4570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:06:50 np0005464933.novalocal python3[4572]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:06:50 np0005464933.novalocal sudo[4570]: pam_unix(sudo:session): session closed for user root
Oct 01 16:06:50 np0005464933.novalocal sudo[4598]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdofunudnhpiixduxipxgufdxxvjojvh ; /usr/bin/python3'
Oct 01 16:06:50 np0005464933.novalocal sudo[4598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:06:50 np0005464933.novalocal python3[4600]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:06:50 np0005464933.novalocal sudo[4598]: pam_unix(sudo:session): session closed for user root
Oct 01 16:06:51 np0005464933.novalocal python3[4627]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-f1ed-25bf-000000001cee-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:06:52 np0005464933.novalocal python3[4657]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:06:53 np0005464933.novalocal sshd-session[4249]: Connection closed by 38.102.83.114 port 42478
Oct 01 16:06:53 np0005464933.novalocal sshd-session[4246]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:06:53 np0005464933.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Oct 01 16:06:53 np0005464933.novalocal systemd[1]: session-3.scope: Consumed 3.565s CPU time.
Oct 01 16:06:53 np0005464933.novalocal systemd-logind[788]: Session 3 logged out. Waiting for processes to exit.
Oct 01 16:06:53 np0005464933.novalocal systemd-logind[788]: Removed session 3.
Oct 01 16:06:55 np0005464933.novalocal sshd-session[4663]: Accepted publickey for zuul from 38.102.83.114 port 35122 ssh2: RSA SHA256:5lTJU/gEmQ/yi1WTLiMVGJft7+lcRZSTGB6P0Q6MG20
Oct 01 16:06:55 np0005464933.novalocal systemd-logind[788]: New session 4 of user zuul.
Oct 01 16:06:55 np0005464933.novalocal systemd[1]: Started Session 4 of User zuul.
Oct 01 16:06:55 np0005464933.novalocal sshd-session[4663]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:06:55 np0005464933.novalocal sudo[4690]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzquejychdybomieqgbcenhphrlbauge ; /usr/bin/python3'
Oct 01 16:06:55 np0005464933.novalocal sudo[4690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:06:55 np0005464933.novalocal python3[4692]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 01 16:07:11 np0005464933.novalocal kernel: SELinux:  Converting 364 SID table entries...
Oct 01 16:07:11 np0005464933.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 16:07:11 np0005464933.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 01 16:07:11 np0005464933.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 16:07:11 np0005464933.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 01 16:07:11 np0005464933.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 16:07:11 np0005464933.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 16:07:11 np0005464933.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 16:07:20 np0005464933.novalocal kernel: SELinux:  Converting 364 SID table entries...
Oct 01 16:07:20 np0005464933.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 16:07:20 np0005464933.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 01 16:07:20 np0005464933.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 16:07:20 np0005464933.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 01 16:07:20 np0005464933.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 16:07:20 np0005464933.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 16:07:20 np0005464933.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 16:07:29 np0005464933.novalocal kernel: SELinux:  Converting 364 SID table entries...
Oct 01 16:07:29 np0005464933.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 16:07:29 np0005464933.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 01 16:07:29 np0005464933.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 16:07:29 np0005464933.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 01 16:07:29 np0005464933.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 16:07:29 np0005464933.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 16:07:29 np0005464933.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 16:07:31 np0005464933.novalocal setsebool[4755]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 01 16:07:31 np0005464933.novalocal setsebool[4755]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct 01 16:07:42 np0005464933.novalocal kernel: SELinux:  Converting 367 SID table entries...
Oct 01 16:07:42 np0005464933.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 16:07:42 np0005464933.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 01 16:07:42 np0005464933.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 16:07:42 np0005464933.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 01 16:07:42 np0005464933.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 16:07:42 np0005464933.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 16:07:42 np0005464933.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 16:08:00 np0005464933.novalocal dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 01 16:08:00 np0005464933.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 01 16:08:00 np0005464933.novalocal systemd[1]: Starting man-db-cache-update.service...
Oct 01 16:08:00 np0005464933.novalocal systemd[1]: Reloading.
Oct 01 16:08:00 np0005464933.novalocal systemd-rc-local-generator[5511]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:08:00 np0005464933.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Oct 01 16:08:02 np0005464933.novalocal systemd[1]: Starting PackageKit Daemon...
Oct 01 16:08:02 np0005464933.novalocal PackageKit[6432]: daemon start
Oct 01 16:08:02 np0005464933.novalocal systemd[1]: Starting Authorization Manager...
Oct 01 16:08:02 np0005464933.novalocal polkitd[6497]: Started polkitd version 0.117
Oct 01 16:08:02 np0005464933.novalocal polkitd[6497]: Loading rules from directory /etc/polkit-1/rules.d
Oct 01 16:08:02 np0005464933.novalocal polkitd[6497]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 01 16:08:02 np0005464933.novalocal polkitd[6497]: Finished loading, compiling and executing 3 rules
Oct 01 16:08:02 np0005464933.novalocal systemd[1]: Started Authorization Manager.
Oct 01 16:08:02 np0005464933.novalocal polkitd[6497]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 01 16:08:02 np0005464933.novalocal systemd[1]: Started PackageKit Daemon.
Oct 01 16:08:03 np0005464933.novalocal sudo[4690]: pam_unix(sudo:session): session closed for user root
Oct 01 16:08:09 np0005464933.novalocal python3[9753]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-f4db-0565-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:08:10 np0005464933.novalocal kernel: evm: overlay not supported
Oct 01 16:08:10 np0005464933.novalocal systemd[1057]: Starting D-Bus User Message Bus...
Oct 01 16:08:10 np0005464933.novalocal dbus-broker-launch[10445]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 01 16:08:10 np0005464933.novalocal dbus-broker-launch[10445]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 01 16:08:10 np0005464933.novalocal systemd[1057]: Started D-Bus User Message Bus.
Oct 01 16:08:10 np0005464933.novalocal dbus-broker-lau[10445]: Ready
Oct 01 16:08:10 np0005464933.novalocal systemd[1057]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 01 16:08:10 np0005464933.novalocal systemd[1057]: Created slice Slice /user.
Oct 01 16:08:10 np0005464933.novalocal systemd[1057]: podman-10351.scope: unit configures an IP firewall, but not running as root.
Oct 01 16:08:10 np0005464933.novalocal systemd[1057]: (This warning is only shown for the first unit using IP firewalling.)
Oct 01 16:08:10 np0005464933.novalocal systemd[1057]: Started podman-10351.scope.
Oct 01 16:08:10 np0005464933.novalocal systemd[1057]: Started podman-pause-76d857cd.scope.
Oct 01 16:08:10 np0005464933.novalocal sudo[10568]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvhxkfbzfarokxkofgvbdsrqjdnlfezt ; /usr/bin/python3'
Oct 01 16:08:10 np0005464933.novalocal sudo[10568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:08:10 np0005464933.novalocal python3[10579]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.166:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.166:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:08:10 np0005464933.novalocal sudo[10568]: pam_unix(sudo:session): session closed for user root
Oct 01 16:08:11 np0005464933.novalocal sshd-session[4666]: Connection closed by 38.102.83.114 port 35122
Oct 01 16:08:11 np0005464933.novalocal sshd-session[4663]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:08:11 np0005464933.novalocal systemd-logind[788]: Session 4 logged out. Waiting for processes to exit.
Oct 01 16:08:11 np0005464933.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Oct 01 16:08:11 np0005464933.novalocal systemd[1]: session-4.scope: Consumed 59.564s CPU time.
Oct 01 16:08:11 np0005464933.novalocal systemd-logind[788]: Removed session 4.
Oct 01 16:08:12 np0005464933.novalocal sshd-session[9041]: Connection reset by 205.210.31.132 port 65534 [preauth]
Oct 01 16:08:30 np0005464933.novalocal sshd-session[18142]: Connection closed by 38.129.56.198 port 45364 [preauth]
Oct 01 16:08:30 np0005464933.novalocal sshd-session[18145]: Unable to negotiate with 38.129.56.198 port 45372: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 01 16:08:30 np0005464933.novalocal sshd-session[18144]: Unable to negotiate with 38.129.56.198 port 45388: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 01 16:08:30 np0005464933.novalocal sshd-session[18143]: Unable to negotiate with 38.129.56.198 port 45398: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 01 16:08:30 np0005464933.novalocal sshd-session[18140]: Connection closed by 38.129.56.198 port 45362 [preauth]
Oct 01 16:08:32 np0005464933.novalocal irqbalance[786]: Cannot change IRQ 27 affinity: Operation not permitted
Oct 01 16:08:32 np0005464933.novalocal irqbalance[786]: IRQ 27 affinity is now unmanaged
Oct 01 16:08:34 np0005464933.novalocal sshd-session[19506]: Accepted publickey for zuul from 38.102.83.114 port 34250 ssh2: RSA SHA256:5lTJU/gEmQ/yi1WTLiMVGJft7+lcRZSTGB6P0Q6MG20
Oct 01 16:08:34 np0005464933.novalocal systemd-logind[788]: New session 5 of user zuul.
Oct 01 16:08:34 np0005464933.novalocal systemd[1]: Started Session 5 of User zuul.
Oct 01 16:08:34 np0005464933.novalocal sshd-session[19506]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:08:35 np0005464933.novalocal python3[19594]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLBr8dOQVAGQ/4kJadVo1v3ZfoAXQXKcctiIqwpQQfW54svyd4WmTM9NrOLnCeEW0IMO36uJvFDnfGVW80YND7A= zuul@np0005464932.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 16:08:35 np0005464933.novalocal sudo[19791]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdaylmgpjpzspdtwziitolhtvsdpjnxp ; /usr/bin/python3'
Oct 01 16:08:35 np0005464933.novalocal sudo[19791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:08:35 np0005464933.novalocal python3[19800]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLBr8dOQVAGQ/4kJadVo1v3ZfoAXQXKcctiIqwpQQfW54svyd4WmTM9NrOLnCeEW0IMO36uJvFDnfGVW80YND7A= zuul@np0005464932.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 16:08:35 np0005464933.novalocal sudo[19791]: pam_unix(sudo:session): session closed for user root
Oct 01 16:08:36 np0005464933.novalocal sudo[20169]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgwcroskdsmugqnytarnnyfkcvusuyyq ; /usr/bin/python3'
Oct 01 16:08:36 np0005464933.novalocal sudo[20169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:08:36 np0005464933.novalocal python3[20178]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005464933.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 01 16:08:36 np0005464933.novalocal useradd[20252]: new group: name=cloud-admin, GID=1002
Oct 01 16:08:36 np0005464933.novalocal useradd[20252]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Oct 01 16:08:36 np0005464933.novalocal sudo[20169]: pam_unix(sudo:session): session closed for user root
Oct 01 16:08:36 np0005464933.novalocal sudo[20375]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyblylepsmcnfehajkjyiszrycmbyieu ; /usr/bin/python3'
Oct 01 16:08:36 np0005464933.novalocal sudo[20375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:08:36 np0005464933.novalocal python3[20384]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLBr8dOQVAGQ/4kJadVo1v3ZfoAXQXKcctiIqwpQQfW54svyd4WmTM9NrOLnCeEW0IMO36uJvFDnfGVW80YND7A= zuul@np0005464932.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 16:08:36 np0005464933.novalocal sudo[20375]: pam_unix(sudo:session): session closed for user root
Oct 01 16:08:36 np0005464933.novalocal sudo[20632]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxdfqntxzpcscukimowylgiiqpqiploe ; /usr/bin/python3'
Oct 01 16:08:36 np0005464933.novalocal sudo[20632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:08:37 np0005464933.novalocal python3[20641]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 16:08:37 np0005464933.novalocal sudo[20632]: pam_unix(sudo:session): session closed for user root
Oct 01 16:08:37 np0005464933.novalocal sudo[20891]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkxntxjfrfxfeqmblnfnjutzjblhmjsb ; /usr/bin/python3'
Oct 01 16:08:37 np0005464933.novalocal sudo[20891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:08:37 np0005464933.novalocal python3[20900]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759334916.8582354-135-227003118367656/source _original_basename=tmpl9yyr2b5 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:08:37 np0005464933.novalocal sudo[20891]: pam_unix(sudo:session): session closed for user root
Oct 01 16:08:38 np0005464933.novalocal sudo[21144]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecubjwvxfwfebxvpcwuhpkjtmdsxsozh ; /usr/bin/python3'
Oct 01 16:08:38 np0005464933.novalocal sudo[21144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:08:38 np0005464933.novalocal python3[21152]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct 01 16:08:38 np0005464933.novalocal systemd[1]: Starting Hostname Service...
Oct 01 16:08:38 np0005464933.novalocal systemd[1]: Started Hostname Service.
Oct 01 16:08:38 np0005464933.novalocal systemd-hostnamed[21269]: Changed pretty hostname to 'compute-0'
Oct 01 16:08:38 compute-0 systemd-hostnamed[21269]: Hostname set to <compute-0> (static)
Oct 01 16:08:38 compute-0 NetworkManager[3952]: <info>  [1759334918.6677] hostname: static hostname changed from "np0005464933.novalocal" to "compute-0"
Oct 01 16:08:38 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 01 16:08:38 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 01 16:08:38 compute-0 sudo[21144]: pam_unix(sudo:session): session closed for user root
Oct 01 16:08:39 compute-0 sshd-session[19544]: Connection closed by 38.102.83.114 port 34250
Oct 01 16:08:39 compute-0 sshd-session[19506]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:08:39 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Oct 01 16:08:39 compute-0 systemd[1]: session-5.scope: Consumed 2.274s CPU time.
Oct 01 16:08:39 compute-0 systemd-logind[788]: Session 5 logged out. Waiting for processes to exit.
Oct 01 16:08:39 compute-0 systemd-logind[788]: Removed session 5.
Oct 01 16:08:48 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 01 16:08:55 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 01 16:08:55 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 01 16:08:55 compute-0 systemd[1]: man-db-cache-update.service: Consumed 57.156s CPU time.
Oct 01 16:08:55 compute-0 systemd[1]: run-r97b93ff5dfcc48d498ead9d8d21f8cdf.service: Deactivated successfully.
Oct 01 16:09:08 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 01 16:12:25 compute-0 sshd-session[26563]: Accepted publickey for zuul from 38.129.56.198 port 48030 ssh2: RSA SHA256:5lTJU/gEmQ/yi1WTLiMVGJft7+lcRZSTGB6P0Q6MG20
Oct 01 16:12:25 compute-0 systemd-logind[788]: New session 6 of user zuul.
Oct 01 16:12:25 compute-0 systemd[1]: Started Session 6 of User zuul.
Oct 01 16:12:25 compute-0 sshd-session[26563]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:12:26 compute-0 python3[26639]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:12:27 compute-0 sudo[26753]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xznidebcmvkbfxtqkxqvvvoeylvpvlzm ; /usr/bin/python3'
Oct 01 16:12:27 compute-0 sudo[26753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:12:27 compute-0 python3[26755]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 16:12:27 compute-0 sudo[26753]: pam_unix(sudo:session): session closed for user root
Oct 01 16:12:27 compute-0 sudo[26826]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtgjrrnerotlkwwkwkzxlfsibuekwvew ; /usr/bin/python3'
Oct 01 16:12:27 compute-0 sudo[26826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:12:28 compute-0 python3[26828]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759335147.3628778-30219-45994363793075/source mode=0755 _original_basename=delorean.repo follow=False checksum=bb4c2ff9dad546f135d54d9729ea11b84117755d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:12:28 compute-0 sudo[26826]: pam_unix(sudo:session): session closed for user root
Oct 01 16:12:28 compute-0 sudo[26852]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjnsammeiglzrmwkculsubxovqghqwpi ; /usr/bin/python3'
Oct 01 16:12:28 compute-0 sudo[26852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:12:28 compute-0 python3[26854]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 16:12:28 compute-0 sudo[26852]: pam_unix(sudo:session): session closed for user root
Oct 01 16:12:28 compute-0 sudo[26925]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efdradoihzwwrlbyzxkpidnbrgtoqecb ; /usr/bin/python3'
Oct 01 16:12:28 compute-0 sudo[26925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:12:28 compute-0 python3[26927]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759335147.3628778-30219-45994363793075/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:12:28 compute-0 sudo[26925]: pam_unix(sudo:session): session closed for user root
Oct 01 16:12:28 compute-0 sudo[26951]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgijgdxdmqwtteowspwfmsehkwwbitpw ; /usr/bin/python3'
Oct 01 16:12:28 compute-0 sudo[26951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:12:28 compute-0 python3[26953]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 16:12:28 compute-0 sudo[26951]: pam_unix(sudo:session): session closed for user root
Oct 01 16:12:29 compute-0 sudo[27024]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klodjkzsekgvlhaokpzmlepenadbuili ; /usr/bin/python3'
Oct 01 16:12:29 compute-0 sudo[27024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:12:29 compute-0 python3[27026]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759335147.3628778-30219-45994363793075/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:12:29 compute-0 sudo[27024]: pam_unix(sudo:session): session closed for user root
Oct 01 16:12:29 compute-0 sudo[27050]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqhqbibzrogohmvxpbrxmcksikypqvko ; /usr/bin/python3'
Oct 01 16:12:29 compute-0 sudo[27050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:12:29 compute-0 python3[27052]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 16:12:29 compute-0 sudo[27050]: pam_unix(sudo:session): session closed for user root
Oct 01 16:12:29 compute-0 sudo[27123]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pylnmzemruuqwportpwohnlrqwmbjpme ; /usr/bin/python3'
Oct 01 16:12:29 compute-0 sudo[27123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:12:29 compute-0 python3[27125]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759335147.3628778-30219-45994363793075/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:12:29 compute-0 sudo[27123]: pam_unix(sudo:session): session closed for user root
Oct 01 16:12:29 compute-0 sudo[27149]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdhuoabevlgcgczqteqvirmegxrjlegg ; /usr/bin/python3'
Oct 01 16:12:29 compute-0 sudo[27149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:12:30 compute-0 python3[27151]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 16:12:30 compute-0 sudo[27149]: pam_unix(sudo:session): session closed for user root
Oct 01 16:12:30 compute-0 sudo[27222]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kotsfubpmryxxekspyfefczobyadzjyo ; /usr/bin/python3'
Oct 01 16:12:30 compute-0 sudo[27222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:12:30 compute-0 python3[27224]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759335147.3628778-30219-45994363793075/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:12:30 compute-0 sudo[27222]: pam_unix(sudo:session): session closed for user root
Oct 01 16:12:30 compute-0 sudo[27248]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmgmpxigdndpxafejxdjzlrazxdekpfo ; /usr/bin/python3'
Oct 01 16:12:30 compute-0 sudo[27248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:12:30 compute-0 python3[27250]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 16:12:30 compute-0 sudo[27248]: pam_unix(sudo:session): session closed for user root
Oct 01 16:12:30 compute-0 sudo[27321]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akivylgimehomljbmgubjlmbtkpojjof ; /usr/bin/python3'
Oct 01 16:12:30 compute-0 sudo[27321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:12:30 compute-0 python3[27323]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759335147.3628778-30219-45994363793075/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:12:30 compute-0 sudo[27321]: pam_unix(sudo:session): session closed for user root
Oct 01 16:12:31 compute-0 sudo[27347]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfdnktgmtejsuftczmujzpwpevlrmkmu ; /usr/bin/python3'
Oct 01 16:12:31 compute-0 sudo[27347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:12:31 compute-0 python3[27349]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 16:12:31 compute-0 sudo[27347]: pam_unix(sudo:session): session closed for user root
Oct 01 16:12:31 compute-0 sudo[27420]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcmhdtbwbwwjvmoazrzmojwimjxetstt ; /usr/bin/python3'
Oct 01 16:12:31 compute-0 sudo[27420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:12:31 compute-0 python3[27422]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759335147.3628778-30219-45994363793075/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=d911291791b114a72daf18f370e91cb1ae300933 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:12:31 compute-0 sudo[27420]: pam_unix(sudo:session): session closed for user root
Oct 01 16:12:33 compute-0 sshd-session[27447]: Connection closed by 192.168.122.11 port 46304 [preauth]
Oct 01 16:12:33 compute-0 sshd-session[27449]: Connection closed by 192.168.122.11 port 46314 [preauth]
Oct 01 16:12:33 compute-0 sshd-session[27448]: Unable to negotiate with 192.168.122.11 port 46318: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 01 16:12:33 compute-0 sshd-session[27450]: Unable to negotiate with 192.168.122.11 port 46320: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 01 16:12:33 compute-0 sshd-session[27451]: Unable to negotiate with 192.168.122.11 port 46330: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 01 16:12:43 compute-0 python3[27480]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:13:08 compute-0 PackageKit[6432]: daemon quit
Oct 01 16:13:08 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Oct 01 16:13:08 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 01 16:13:08 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 01 16:13:08 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Oct 01 16:13:08 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 01 16:16:54 compute-0 sshd-session[27489]: Invalid user user from 185.156.73.233 port 52810
Oct 01 16:16:54 compute-0 sshd-session[27489]: Connection closed by invalid user user 185.156.73.233 port 52810 [preauth]
Oct 01 16:17:42 compute-0 sshd-session[26566]: Received disconnect from 38.129.56.198 port 48030:11: disconnected by user
Oct 01 16:17:42 compute-0 sshd-session[26566]: Disconnected from user zuul 38.129.56.198 port 48030
Oct 01 16:17:42 compute-0 sshd-session[26563]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:17:42 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Oct 01 16:17:42 compute-0 systemd[1]: session-6.scope: Consumed 4.716s CPU time.
Oct 01 16:17:42 compute-0 systemd-logind[788]: Session 6 logged out. Waiting for processes to exit.
Oct 01 16:17:42 compute-0 systemd-logind[788]: Removed session 6.
Oct 01 16:23:45 compute-0 sshd-session[27493]: Accepted publickey for zuul from 192.168.122.30 port 51966 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:23:45 compute-0 systemd-logind[788]: New session 7 of user zuul.
Oct 01 16:23:45 compute-0 systemd[1]: Started Session 7 of User zuul.
Oct 01 16:23:45 compute-0 sshd-session[27493]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:23:46 compute-0 python3.9[27646]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:23:47 compute-0 sudo[27825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efnmaaxgsvhktwrlajvfyiskkpdppezs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759335827.2345006-32-154291909174287/AnsiballZ_command.py'
Oct 01 16:23:47 compute-0 sudo[27825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:23:47 compute-0 python3.9[27827]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:23:54 compute-0 sudo[27825]: pam_unix(sudo:session): session closed for user root
Oct 01 16:23:55 compute-0 sshd-session[27496]: Connection closed by 192.168.122.30 port 51966
Oct 01 16:23:55 compute-0 sshd-session[27493]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:23:55 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Oct 01 16:23:55 compute-0 systemd[1]: session-7.scope: Consumed 7.568s CPU time.
Oct 01 16:23:55 compute-0 systemd-logind[788]: Session 7 logged out. Waiting for processes to exit.
Oct 01 16:23:55 compute-0 systemd-logind[788]: Removed session 7.
Oct 01 16:24:10 compute-0 sshd-session[27886]: Accepted publickey for zuul from 192.168.122.30 port 36056 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:24:10 compute-0 systemd-logind[788]: New session 8 of user zuul.
Oct 01 16:24:10 compute-0 systemd[1]: Started Session 8 of User zuul.
Oct 01 16:24:10 compute-0 sshd-session[27886]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:24:10 compute-0 python3.9[28039]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 01 16:24:12 compute-0 python3.9[28213]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:24:12 compute-0 sudo[28363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykguhmlwoewhsoeyaoqibvnayarkgklr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759335852.3363428-45-7661925502577/AnsiballZ_command.py'
Oct 01 16:24:12 compute-0 sudo[28363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:24:12 compute-0 python3.9[28365]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:24:13 compute-0 sudo[28363]: pam_unix(sudo:session): session closed for user root
Oct 01 16:24:13 compute-0 sudo[28516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpytcirzubivyqfrrcaimfiutkfrdoqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759335853.298515-57-13860858741377/AnsiballZ_stat.py'
Oct 01 16:24:13 compute-0 sudo[28516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:24:13 compute-0 python3.9[28518]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:24:13 compute-0 sudo[28516]: pam_unix(sudo:session): session closed for user root
Oct 01 16:24:14 compute-0 sudo[28668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyazbsqiursujwozihyshkxjoulnjtvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759335854.1089418-65-136526801433224/AnsiballZ_file.py'
Oct 01 16:24:14 compute-0 sudo[28668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:24:14 compute-0 python3.9[28670]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:24:14 compute-0 sudo[28668]: pam_unix(sudo:session): session closed for user root
Oct 01 16:24:15 compute-0 sudo[28820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djhonepoqgagzlnsbiratgkgmjqerarm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759335854.9861367-73-204228745355314/AnsiballZ_stat.py'
Oct 01 16:24:15 compute-0 sudo[28820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:24:15 compute-0 python3.9[28822]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:24:15 compute-0 sudo[28820]: pam_unix(sudo:session): session closed for user root
Oct 01 16:24:16 compute-0 sudo[28943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmzvplxkuncdxosastykjvlxvlkuojiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759335854.9861367-73-204228745355314/AnsiballZ_copy.py'
Oct 01 16:24:16 compute-0 sudo[28943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:24:16 compute-0 python3.9[28945]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759335854.9861367-73-204228745355314/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:24:16 compute-0 sudo[28943]: pam_unix(sudo:session): session closed for user root
Oct 01 16:24:16 compute-0 sudo[29096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugvxgcjeujgxvcidgkuliyaxbwtjaaqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759335856.396786-88-66701625033025/AnsiballZ_setup.py'
Oct 01 16:24:16 compute-0 sudo[29096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:24:16 compute-0 python3.9[29098]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:24:17 compute-0 sudo[29096]: pam_unix(sudo:session): session closed for user root
Oct 01 16:24:17 compute-0 sudo[29252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-panhmcesyafwegjqqlmligfrnondgjqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759335857.3585057-96-273034696333458/AnsiballZ_file.py'
Oct 01 16:24:17 compute-0 sudo[29252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:24:17 compute-0 python3.9[29254]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:24:17 compute-0 sudo[29252]: pam_unix(sudo:session): session closed for user root
Oct 01 16:24:18 compute-0 python3.9[29404]: ansible-ansible.builtin.service_facts Invoked
Oct 01 16:24:21 compute-0 python3.9[29659]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:24:22 compute-0 python3.9[29809]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:24:23 compute-0 python3.9[29963]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:24:24 compute-0 sudo[30119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdfqaeeunenluuohhdexehsyobdhcgky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759335864.0568342-144-227818372433262/AnsiballZ_setup.py'
Oct 01 16:24:24 compute-0 sudo[30119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:24:24 compute-0 python3.9[30121]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 16:24:24 compute-0 sudo[30119]: pam_unix(sudo:session): session closed for user root
Oct 01 16:24:25 compute-0 sudo[30203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzequjhnxrxcckaujmslbiuwmororyvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759335864.0568342-144-227818372433262/AnsiballZ_dnf.py'
Oct 01 16:24:25 compute-0 sudo[30203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:24:25 compute-0 python3.9[30205]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:25:17 compute-0 systemd[1]: Reloading.
Oct 01 16:25:17 compute-0 systemd-rc-local-generator[30400]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:25:17 compute-0 systemd[1]: Starting dnf makecache...
Oct 01 16:25:17 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 01 16:25:17 compute-0 dnf[30410]: Failed determining last makecache time.
Oct 01 16:25:18 compute-0 dnf[30410]: delorean-openstack-barbican-42b4c41831408a8e323 125 kB/s | 3.0 kB     00:00
Oct 01 16:25:18 compute-0 dnf[30410]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 130 kB/s | 3.0 kB     00:00
Oct 01 16:25:18 compute-0 dnf[30410]: delorean-openstack-cinder-1c00d6490d88e436f26ef 121 kB/s | 3.0 kB     00:00
Oct 01 16:25:18 compute-0 dnf[30410]: delorean-python-stevedore-c4acc5639fd2329372142 150 kB/s | 3.0 kB     00:00
Oct 01 16:25:18 compute-0 dnf[30410]: delorean-python-cloudkitty-tests-tempest-3961dc 161 kB/s | 3.0 kB     00:00
Oct 01 16:25:18 compute-0 dnf[30410]: delorean-os-net-config-28598c2978b9e2207dd19fc4 158 kB/s | 3.0 kB     00:00
Oct 01 16:25:18 compute-0 systemd[1]: Reloading.
Oct 01 16:25:18 compute-0 dnf[30410]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 138 kB/s | 3.0 kB     00:00
Oct 01 16:25:18 compute-0 dnf[30410]: delorean-python-designate-tests-tempest-347fdbc 134 kB/s | 3.0 kB     00:00
Oct 01 16:25:18 compute-0 dnf[30410]: delorean-openstack-glance-1fd12c29b339f30fe823e 126 kB/s | 3.0 kB     00:00
Oct 01 16:25:18 compute-0 systemd-rc-local-generator[30450]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:25:18 compute-0 dnf[30410]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 127 kB/s | 3.0 kB     00:00
Oct 01 16:25:18 compute-0 dnf[30410]: delorean-openstack-manila-3c01b7181572c95dac462 148 kB/s | 3.0 kB     00:00
Oct 01 16:25:18 compute-0 dnf[30410]: delorean-python-whitebox-neutron-tests-tempest- 136 kB/s | 3.0 kB     00:00
Oct 01 16:25:18 compute-0 dnf[30410]: delorean-openstack-octavia-ba397f07a7331190208c 136 kB/s | 3.0 kB     00:00
Oct 01 16:25:18 compute-0 dnf[30410]: delorean-openstack-watcher-c014f81a8647287f6dcc 131 kB/s | 3.0 kB     00:00
Oct 01 16:25:18 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 01 16:25:18 compute-0 dnf[30410]: delorean-edpm-image-builder-55ba53cf215b14ed95b 149 kB/s | 3.0 kB     00:00
Oct 01 16:25:18 compute-0 dnf[30410]: delorean-puppet-ceph-b0c245ccde541a63fde0564366 131 kB/s | 3.0 kB     00:00
Oct 01 16:25:18 compute-0 dnf[30410]: delorean-openstack-swift-dc98a8463506ac520c469a 139 kB/s | 3.0 kB     00:00
Oct 01 16:25:18 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 01 16:25:18 compute-0 systemd[1]: Reloading.
Oct 01 16:25:18 compute-0 dnf[30410]: delorean-python-tempestconf-8515371b7cceebd4282 126 kB/s | 3.0 kB     00:00
Oct 01 16:25:18 compute-0 dnf[30410]: delorean-openstack-heat-ui-013accbfd179753bc3f0 139 kB/s | 3.0 kB     00:00
Oct 01 16:25:18 compute-0 systemd-rc-local-generator[30501]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:25:18 compute-0 dnf[30410]: CentOS Stream 9 - BaseOS                         77 kB/s | 6.7 kB     00:00
Oct 01 16:25:18 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 01 16:25:18 compute-0 dnf[30410]: CentOS Stream 9 - AppStream                      69 kB/s | 6.8 kB     00:00
Oct 01 16:25:18 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Oct 01 16:25:18 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Oct 01 16:25:18 compute-0 dnf[30410]: CentOS Stream 9 - CRB                            68 kB/s | 6.6 kB     00:00
Oct 01 16:25:19 compute-0 dnf[30410]: CentOS Stream 9 - Extras packages                82 kB/s | 8.0 kB     00:00
Oct 01 16:25:19 compute-0 dnf[30410]: dlrn-antelope-testing                           172 kB/s | 3.0 kB     00:00
Oct 01 16:25:19 compute-0 dnf[30410]: dlrn-antelope-build-deps                        182 kB/s | 3.0 kB     00:00
Oct 01 16:25:19 compute-0 dnf[30410]: centos9-rabbitmq                                123 kB/s | 3.0 kB     00:00
Oct 01 16:25:19 compute-0 dnf[30410]: centos9-storage                                 128 kB/s | 3.0 kB     00:00
Oct 01 16:25:19 compute-0 dnf[30410]: centos9-opstools                                130 kB/s | 3.0 kB     00:00
Oct 01 16:25:19 compute-0 dnf[30410]: NFV SIG OpenvSwitch                              34 kB/s | 3.0 kB     00:00
Oct 01 16:25:19 compute-0 dnf[30410]: repo-setup-centos-appstream                     161 kB/s | 4.4 kB     00:00
Oct 01 16:25:19 compute-0 dnf[30410]: repo-setup-centos-baseos                        134 kB/s | 3.9 kB     00:00
Oct 01 16:25:19 compute-0 dnf[30410]: repo-setup-centos-highavailability              119 kB/s | 3.9 kB     00:00
Oct 01 16:25:19 compute-0 dnf[30410]: repo-setup-centos-powertools                    160 kB/s | 4.3 kB     00:00
Oct 01 16:25:19 compute-0 dnf[30410]: Extra Packages for Enterprise Linux 9 - x86_64  219 kB/s |  34 kB     00:00
Oct 01 16:25:20 compute-0 dnf[30410]: Metadata cache created.
Oct 01 16:25:20 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 01 16:25:20 compute-0 systemd[1]: Finished dnf makecache.
Oct 01 16:25:20 compute-0 systemd[1]: dnf-makecache.service: Consumed 2.232s CPU time.
Oct 01 16:26:28 compute-0 kernel: SELinux:  Converting 2714 SID table entries...
Oct 01 16:26:28 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 16:26:28 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 01 16:26:28 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 16:26:28 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 01 16:26:28 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 16:26:28 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 16:26:28 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 16:26:28 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct 01 16:26:28 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 01 16:26:28 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 01 16:26:28 compute-0 systemd[1]: Reloading.
Oct 01 16:26:28 compute-0 systemd-rc-local-generator[30852]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:26:28 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 01 16:26:29 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 01 16:26:29 compute-0 PackageKit[31093]: daemon start
Oct 01 16:26:29 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 01 16:26:29 compute-0 sudo[30203]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:29 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 01 16:26:29 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 01 16:26:29 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.323s CPU time.
Oct 01 16:26:29 compute-0 systemd[1]: run-re566b74ebfb74c7ea091d8b25d7bc470.service: Deactivated successfully.
Oct 01 16:26:29 compute-0 sudo[31771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fagytwavzizbddyeblqydmyxqidawvpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759335989.51824-156-53388126145677/AnsiballZ_command.py'
Oct 01 16:26:29 compute-0 sudo[31771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:30 compute-0 python3.9[31773]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:26:31 compute-0 sudo[31771]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:32 compute-0 sudo[32052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fczicmrcijlqgxhjmstxhbrtrwnsogzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759335991.3395338-164-248660842354242/AnsiballZ_selinux.py'
Oct 01 16:26:32 compute-0 sudo[32052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:32 compute-0 python3.9[32054]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 01 16:26:32 compute-0 sudo[32052]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:32 compute-0 sudo[32204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwjcsqgqnbobyaecsznvgavzryckfrdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759335992.6678166-175-254389391117788/AnsiballZ_command.py'
Oct 01 16:26:32 compute-0 sudo[32204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:33 compute-0 python3.9[32206]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 01 16:26:34 compute-0 sudo[32204]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:34 compute-0 sudo[32357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snnjdakcvzbsnjrqzxaribqjomxpjiam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759335994.376112-183-58736460883927/AnsiballZ_file.py'
Oct 01 16:26:34 compute-0 sudo[32357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:35 compute-0 python3.9[32359]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:26:35 compute-0 sudo[32357]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:36 compute-0 sudo[32509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgfsqcawpfcbhxljzbuggdxbayqxvmdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759335995.6033528-191-254776930214814/AnsiballZ_mount.py'
Oct 01 16:26:36 compute-0 sudo[32509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:36 compute-0 python3.9[32511]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 01 16:26:36 compute-0 sudo[32509]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:37 compute-0 sudo[32661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqofdaiermebdceviocjpksgshxhioqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759335996.9515312-219-244597497447415/AnsiballZ_file.py'
Oct 01 16:26:37 compute-0 sudo[32661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:37 compute-0 python3.9[32663]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:26:37 compute-0 sudo[32661]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:38 compute-0 sudo[32813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-durggwxzjzaofglgqoyvcwyyurwogxea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759335997.6139593-227-130335101738872/AnsiballZ_stat.py'
Oct 01 16:26:38 compute-0 sudo[32813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:38 compute-0 python3.9[32815]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:26:38 compute-0 sudo[32813]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:38 compute-0 sudo[32936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xabqhzegwyruxvyziwsbvnwzgjnybvii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759335997.6139593-227-130335101738872/AnsiballZ_copy.py'
Oct 01 16:26:38 compute-0 sudo[32936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:38 compute-0 python3.9[32938]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335997.6139593-227-130335101738872/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bf4be44dc9b0cb27bebca4408e722e3ce3fb0177 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:26:38 compute-0 sudo[32936]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:40 compute-0 sudo[33088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdfkjuqbdozqkdhdttomuxslpgahpnsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759335999.4930995-254-206124874472318/AnsiballZ_getent.py'
Oct 01 16:26:40 compute-0 sudo[33088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:42 compute-0 python3.9[33090]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 01 16:26:42 compute-0 sudo[33088]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:43 compute-0 sudo[33241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlvobvbvvpfxicnhimxqcrkbqeyhlusk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336002.5808492-262-173551602495488/AnsiballZ_group.py'
Oct 01 16:26:43 compute-0 sudo[33241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:43 compute-0 python3.9[33243]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 01 16:26:43 compute-0 groupadd[33244]: group added to /etc/group: name=qemu, GID=107
Oct 01 16:26:43 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 16:26:43 compute-0 groupadd[33244]: group added to /etc/gshadow: name=qemu
Oct 01 16:26:43 compute-0 groupadd[33244]: new group: name=qemu, GID=107
Oct 01 16:26:43 compute-0 sudo[33241]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:44 compute-0 sudo[33400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nokulaplvlckslbavyfyctbzadmgvwwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336003.7547479-270-226779351172429/AnsiballZ_user.py'
Oct 01 16:26:44 compute-0 sudo[33400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:44 compute-0 python3.9[33402]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 01 16:26:44 compute-0 useradd[33404]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Oct 01 16:26:44 compute-0 sudo[33400]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:45 compute-0 sudo[33560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkxdatrbocrdicyvsqpyvijtkzajehez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336004.7630806-278-251742239158481/AnsiballZ_getent.py'
Oct 01 16:26:45 compute-0 sudo[33560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:45 compute-0 python3.9[33562]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 01 16:26:45 compute-0 sudo[33560]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:45 compute-0 sudo[33713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmbstsfonpgnlbcruyxyatxkoalpcxyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336005.4684205-286-183435355780044/AnsiballZ_group.py'
Oct 01 16:26:45 compute-0 sudo[33713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:45 compute-0 python3.9[33715]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 01 16:26:45 compute-0 groupadd[33716]: group added to /etc/group: name=hugetlbfs, GID=42477
Oct 01 16:26:45 compute-0 groupadd[33716]: group added to /etc/gshadow: name=hugetlbfs
Oct 01 16:26:46 compute-0 groupadd[33716]: new group: name=hugetlbfs, GID=42477
Oct 01 16:26:46 compute-0 sudo[33713]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:47 compute-0 sudo[33871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgvdgsdtqnmhvlbtceddrldnsjuwrqcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336006.3185763-295-274767057124670/AnsiballZ_file.py'
Oct 01 16:26:47 compute-0 sudo[33871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:47 compute-0 python3.9[33873]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 01 16:26:47 compute-0 sudo[33871]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:47 compute-0 sudo[34023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxvpmcgfochkrjioeuqruajgmfhhjyqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336007.5646982-306-135693103765415/AnsiballZ_dnf.py'
Oct 01 16:26:47 compute-0 sudo[34023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:48 compute-0 python3.9[34025]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:26:49 compute-0 sudo[34023]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:50 compute-0 sudo[34176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lifmlbbvlvjzjytprcyebamkggkizsge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336010.13291-314-96251223688394/AnsiballZ_file.py'
Oct 01 16:26:50 compute-0 sudo[34176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:50 compute-0 python3.9[34178]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:26:50 compute-0 sudo[34176]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:51 compute-0 sudo[34328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igrragsqaaojyipjuvtfphvsfflacbkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336011.0507917-322-116180616657171/AnsiballZ_stat.py'
Oct 01 16:26:51 compute-0 sudo[34328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:51 compute-0 python3.9[34330]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:26:51 compute-0 sudo[34328]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:52 compute-0 sudo[34451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tatlzbxkrynxbdlbszqtsdzrsdhbrwck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336011.0507917-322-116180616657171/AnsiballZ_copy.py'
Oct 01 16:26:52 compute-0 sudo[34451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:52 compute-0 python3.9[34453]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759336011.0507917-322-116180616657171/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:26:52 compute-0 sudo[34451]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:53 compute-0 sudo[34603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxjyhbjxmcjkshaoukelajljjtyehehi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336012.3997602-337-85064237556990/AnsiballZ_systemd.py'
Oct 01 16:26:53 compute-0 sudo[34603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:53 compute-0 python3.9[34605]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 16:26:53 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 01 16:26:53 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 01 16:26:53 compute-0 kernel: Bridge firewalling registered
Oct 01 16:26:53 compute-0 systemd-modules-load[34609]: Inserted module 'br_netfilter'
Oct 01 16:26:53 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 01 16:26:53 compute-0 sudo[34603]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:54 compute-0 sudo[34763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uffsooycgsyfvbvisablgegqggwzfjgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336013.9280338-345-192367027198870/AnsiballZ_stat.py'
Oct 01 16:26:54 compute-0 sudo[34763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:54 compute-0 python3.9[34765]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:26:54 compute-0 sudo[34763]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:55 compute-0 sudo[34886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-herpxvzddvqnvngljcodqymlnlfzjwcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336013.9280338-345-192367027198870/AnsiballZ_copy.py'
Oct 01 16:26:55 compute-0 sudo[34886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:55 compute-0 python3.9[34888]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759336013.9280338-345-192367027198870/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:26:55 compute-0 sudo[34886]: pam_unix(sudo:session): session closed for user root
Oct 01 16:26:55 compute-0 sudo[35038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbhqmeenmpkqkrtdppczuwzwtpzsemcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336015.6346288-363-226190250236462/AnsiballZ_dnf.py'
Oct 01 16:26:55 compute-0 sudo[35038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:26:56 compute-0 python3.9[35040]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:26:59 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Oct 01 16:26:59 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Oct 01 16:26:59 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 01 16:26:59 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 01 16:26:59 compute-0 systemd[1]: Reloading.
Oct 01 16:27:00 compute-0 systemd-rc-local-generator[35105]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:27:00 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 01 16:27:00 compute-0 sudo[35038]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:01 compute-0 python3.9[36288]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:27:02 compute-0 python3.9[37332]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 01 16:27:03 compute-0 python3.9[38132]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:27:03 compute-0 sudo[38871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qalxdvujfffpcxiddqlgjmlsgpcbymjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336023.4579582-402-121180172142120/AnsiballZ_command.py'
Oct 01 16:27:03 compute-0 sudo[38871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:03 compute-0 python3.9[38881]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:27:04 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 01 16:27:04 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 01 16:27:04 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 01 16:27:04 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.531s CPU time.
Oct 01 16:27:04 compute-0 systemd[1]: run-r2479df4f4fad486db14a97f85fd42697.service: Deactivated successfully.
Oct 01 16:27:04 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 01 16:27:04 compute-0 sudo[38871]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:05 compute-0 sudo[39576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhsjjsfsteuobiamjwyjnfmuctulyvpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336024.7990184-411-5501704704432/AnsiballZ_systemd.py'
Oct 01 16:27:05 compute-0 sudo[39576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:05 compute-0 python3.9[39578]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:27:05 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 01 16:27:05 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Oct 01 16:27:05 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 01 16:27:05 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 01 16:27:05 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 01 16:27:05 compute-0 sudo[39576]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:06 compute-0 python3.9[39740]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 01 16:27:08 compute-0 sudo[39890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpgegqquxwcsbsjbahasqphkvtvgtvxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336028.6505399-468-58088297927920/AnsiballZ_systemd.py'
Oct 01 16:27:08 compute-0 sudo[39890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:09 compute-0 python3.9[39892]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:27:09 compute-0 systemd[1]: Reloading.
Oct 01 16:27:09 compute-0 systemd-rc-local-generator[39922]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:27:09 compute-0 sudo[39890]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:10 compute-0 sudo[40079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cewdwxnccmazframflpcevfylakwrhib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336029.7810094-468-103464622052175/AnsiballZ_systemd.py'
Oct 01 16:27:10 compute-0 sudo[40079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:10 compute-0 python3.9[40081]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:27:10 compute-0 systemd[1]: Reloading.
Oct 01 16:27:10 compute-0 systemd-rc-local-generator[40109]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:27:10 compute-0 sudo[40079]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:11 compute-0 sudo[40267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqdjnjefqjbaqaxdcjoogutqazzjzagq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336031.0237606-484-280697832534226/AnsiballZ_command.py'
Oct 01 16:27:11 compute-0 sudo[40267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:11 compute-0 python3.9[40269]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:27:11 compute-0 sudo[40267]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:12 compute-0 sudo[40420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzrwlngpppalpaisujokikgbjobzkuwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336031.7467482-492-120877964289430/AnsiballZ_command.py'
Oct 01 16:27:12 compute-0 sudo[40420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:12 compute-0 python3.9[40422]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:27:12 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 01 16:27:12 compute-0 sudo[40420]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:12 compute-0 sudo[40573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcmjzqkxdbunvcvquaurvuluhfahvwqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336032.4520347-500-70501786336950/AnsiballZ_command.py'
Oct 01 16:27:12 compute-0 sudo[40573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:13 compute-0 python3.9[40575]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:27:14 compute-0 sudo[40573]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:14 compute-0 sudo[40735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igguqkqbcwmsvelwjoxrtzuprehpoiez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336034.6170928-508-273792320064675/AnsiballZ_command.py'
Oct 01 16:27:14 compute-0 sudo[40735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:15 compute-0 python3.9[40737]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:27:15 compute-0 sudo[40735]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:15 compute-0 sudo[40888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbcyfeuksghmaycprgueybpsezatewrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336035.5076735-516-230348991220466/AnsiballZ_systemd.py'
Oct 01 16:27:15 compute-0 sudo[40888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:16 compute-0 python3.9[40890]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 16:27:16 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 01 16:27:16 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Oct 01 16:27:16 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Oct 01 16:27:16 compute-0 systemd[1]: Starting Apply Kernel Variables...
Oct 01 16:27:16 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 01 16:27:16 compute-0 systemd[1]: Finished Apply Kernel Variables.
Oct 01 16:27:16 compute-0 sudo[40888]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:16 compute-0 sshd-session[27889]: Connection closed by 192.168.122.30 port 36056
Oct 01 16:27:16 compute-0 sshd-session[27886]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:27:16 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Oct 01 16:27:16 compute-0 systemd[1]: session-8.scope: Consumed 2min 18.814s CPU time.
Oct 01 16:27:16 compute-0 systemd-logind[788]: Session 8 logged out. Waiting for processes to exit.
Oct 01 16:27:16 compute-0 systemd-logind[788]: Removed session 8.
Oct 01 16:27:21 compute-0 sshd-session[40920]: Accepted publickey for zuul from 192.168.122.30 port 49182 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:27:21 compute-0 systemd-logind[788]: New session 9 of user zuul.
Oct 01 16:27:21 compute-0 systemd[1]: Started Session 9 of User zuul.
Oct 01 16:27:21 compute-0 sshd-session[40920]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:27:22 compute-0 python3.9[41073]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:27:23 compute-0 sudo[41227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aocdpgzqjfuoedslidzbmefveabmhphd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336043.0409024-36-232551184648726/AnsiballZ_getent.py'
Oct 01 16:27:23 compute-0 sudo[41227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:23 compute-0 python3.9[41229]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 01 16:27:23 compute-0 sudo[41227]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:24 compute-0 sudo[41380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqlesxyurbpsqxeknakxkxrzalbzmsco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336044.0279212-44-159561581506509/AnsiballZ_group.py'
Oct 01 16:27:24 compute-0 sudo[41380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:24 compute-0 python3.9[41382]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 01 16:27:24 compute-0 groupadd[41383]: group added to /etc/group: name=openvswitch, GID=42476
Oct 01 16:27:24 compute-0 groupadd[41383]: group added to /etc/gshadow: name=openvswitch
Oct 01 16:27:24 compute-0 groupadd[41383]: new group: name=openvswitch, GID=42476
Oct 01 16:27:24 compute-0 sudo[41380]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:25 compute-0 sudo[41538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddhxvsmyfzgmzqpvwmwghdcqhvcpskuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336045.0239043-52-199342608819985/AnsiballZ_user.py'
Oct 01 16:27:25 compute-0 sudo[41538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:25 compute-0 python3.9[41540]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 01 16:27:25 compute-0 useradd[41542]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Oct 01 16:27:25 compute-0 useradd[41542]: add 'openvswitch' to group 'hugetlbfs'
Oct 01 16:27:25 compute-0 useradd[41542]: add 'openvswitch' to shadow group 'hugetlbfs'
Oct 01 16:27:25 compute-0 sudo[41538]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:26 compute-0 sudo[41698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdnxpvxdejdxwajjjtssyghonadxqbyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336046.1244514-62-225068132970205/AnsiballZ_setup.py'
Oct 01 16:27:26 compute-0 sudo[41698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:26 compute-0 python3.9[41700]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 16:27:26 compute-0 sudo[41698]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:27 compute-0 sudo[41782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czxedtlgfaojqmanhefrbpwvdqdsyrwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336046.1244514-62-225068132970205/AnsiballZ_dnf.py'
Oct 01 16:27:27 compute-0 sudo[41782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:27 compute-0 python3.9[41784]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 01 16:27:29 compute-0 sudo[41782]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:30 compute-0 sudo[41946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phqdrteogevtivbgtgnjvbneovcnouim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336050.1647785-76-231565065786167/AnsiballZ_dnf.py'
Oct 01 16:27:30 compute-0 sudo[41946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:30 compute-0 python3.9[41948]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:27:44 compute-0 kernel: SELinux:  Converting 2724 SID table entries...
Oct 01 16:27:44 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 16:27:44 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 01 16:27:44 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 16:27:44 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 01 16:27:44 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 16:27:44 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 16:27:44 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 16:27:44 compute-0 groupadd[41971]: group added to /etc/group: name=unbound, GID=993
Oct 01 16:27:44 compute-0 groupadd[41971]: group added to /etc/gshadow: name=unbound
Oct 01 16:27:44 compute-0 groupadd[41971]: new group: name=unbound, GID=993
Oct 01 16:27:44 compute-0 useradd[41978]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Oct 01 16:27:44 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct 01 16:27:44 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 01 16:27:45 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 01 16:27:45 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 01 16:27:45 compute-0 systemd[1]: Reloading.
Oct 01 16:27:45 compute-0 systemd-sysv-generator[42480]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:27:45 compute-0 systemd-rc-local-generator[42477]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:27:46 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 01 16:27:46 compute-0 sudo[41946]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:47 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 01 16:27:47 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 01 16:27:47 compute-0 systemd[1]: run-rde9a467c35404a8e9a05d5ec305e7cb8.service: Deactivated successfully.
Oct 01 16:27:47 compute-0 sudo[43048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxxvfussebagkxdyegstwlxrvikdgtdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336066.897381-84-104790683145618/AnsiballZ_systemd.py'
Oct 01 16:27:47 compute-0 sudo[43048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:47 compute-0 python3.9[43050]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 01 16:27:47 compute-0 systemd[1]: Reloading.
Oct 01 16:27:48 compute-0 systemd-rc-local-generator[43079]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:27:48 compute-0 systemd-sysv-generator[43084]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:27:48 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Oct 01 16:27:48 compute-0 chown[43091]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 01 16:27:48 compute-0 ovs-ctl[43096]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct 01 16:27:48 compute-0 ovs-ctl[43096]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct 01 16:27:48 compute-0 ovs-ctl[43096]: Starting ovsdb-server [  OK  ]
Oct 01 16:27:48 compute-0 ovs-vsctl[43145]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 01 16:27:48 compute-0 ovs-vsctl[43165]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"d2971fc2-5b75-459a-98a0-6e626d0d4d99\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 01 16:27:48 compute-0 ovs-ctl[43096]: Configuring Open vSwitch system IDs [  OK  ]
Oct 01 16:27:48 compute-0 ovs-ctl[43096]: Enabling remote OVSDB managers [  OK  ]
Oct 01 16:27:48 compute-0 ovs-vsctl[43171]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 01 16:27:48 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Oct 01 16:27:48 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 01 16:27:48 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 01 16:27:48 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 01 16:27:48 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Oct 01 16:27:48 compute-0 ovs-ctl[43216]: Inserting openvswitch module [  OK  ]
Oct 01 16:27:48 compute-0 ovs-ctl[43185]: Starting ovs-vswitchd [  OK  ]
Oct 01 16:27:48 compute-0 ovs-vsctl[43236]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 01 16:27:48 compute-0 ovs-ctl[43185]: Enabling remote OVSDB managers [  OK  ]
Oct 01 16:27:48 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 01 16:27:48 compute-0 systemd[1]: Starting Open vSwitch...
Oct 01 16:27:48 compute-0 systemd[1]: Finished Open vSwitch.
Oct 01 16:27:48 compute-0 sudo[43048]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:49 compute-0 python3.9[43388]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:27:50 compute-0 sudo[43538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyktvavuydhybwnugsrsjugsgvokckqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336070.0446057-102-150668254193506/AnsiballZ_sefcontext.py'
Oct 01 16:27:50 compute-0 sudo[43538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:50 compute-0 python3.9[43540]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 01 16:27:52 compute-0 kernel: SELinux:  Converting 2738 SID table entries...
Oct 01 16:27:52 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 16:27:52 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 01 16:27:52 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 16:27:52 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 01 16:27:52 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 16:27:52 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 16:27:52 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 16:27:52 compute-0 sudo[43538]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:53 compute-0 python3.9[43695]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:27:54 compute-0 sudo[43851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeziuhfdtulfgujvsprdhbssxnqxjpgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336073.7045367-120-167480651136615/AnsiballZ_dnf.py'
Oct 01 16:27:54 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct 01 16:27:54 compute-0 sudo[43851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:54 compute-0 python3.9[43853]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:27:55 compute-0 sudo[43851]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:56 compute-0 sudo[44004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgmaaxamwavuzvpfwzobjuddnzqktqzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336075.943107-128-50132541867033/AnsiballZ_command.py'
Oct 01 16:27:56 compute-0 sudo[44004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:56 compute-0 python3.9[44006]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:27:57 compute-0 sudo[44004]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:58 compute-0 sudo[44291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkqnzhqpqvjyrqkmnfaipmwlyxiyjiuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336077.5806086-136-28402943796561/AnsiballZ_file.py'
Oct 01 16:27:58 compute-0 sudo[44291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:58 compute-0 python3.9[44293]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 01 16:27:58 compute-0 sudo[44291]: pam_unix(sudo:session): session closed for user root
Oct 01 16:27:59 compute-0 python3.9[44443]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:27:59 compute-0 sudo[44595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auntrjqquuzpnovhrbtilppedtdebpfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336079.3189461-152-121811836449908/AnsiballZ_dnf.py'
Oct 01 16:27:59 compute-0 sudo[44595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:27:59 compute-0 python3.9[44597]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:28:01 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 01 16:28:01 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 01 16:28:01 compute-0 systemd[1]: Reloading.
Oct 01 16:28:01 compute-0 systemd-rc-local-generator[44636]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:28:01 compute-0 systemd-sysv-generator[44641]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:28:01 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 01 16:28:02 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 01 16:28:02 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 01 16:28:02 compute-0 systemd[1]: run-r330f522173fd4d568e6d207dfb4e302f.service: Deactivated successfully.
Oct 01 16:28:02 compute-0 sudo[44595]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:02 compute-0 sudo[44912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owhxabwjeriumqxzjeapnauknkkipwcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336082.4854805-160-103019288942400/AnsiballZ_systemd.py'
Oct 01 16:28:02 compute-0 sudo[44912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:03 compute-0 python3.9[44914]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 16:28:03 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 01 16:28:03 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Oct 01 16:28:03 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Oct 01 16:28:03 compute-0 systemd[1]: Stopping Network Manager...
Oct 01 16:28:03 compute-0 NetworkManager[3952]: <info>  [1759336083.2396] caught SIGTERM, shutting down normally.
Oct 01 16:28:03 compute-0 NetworkManager[3952]: <info>  [1759336083.2414] dhcp4 (eth0): canceled DHCP transaction
Oct 01 16:28:03 compute-0 NetworkManager[3952]: <info>  [1759336083.2415] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 01 16:28:03 compute-0 NetworkManager[3952]: <info>  [1759336083.2415] dhcp4 (eth0): state changed no lease
Oct 01 16:28:03 compute-0 NetworkManager[3952]: <info>  [1759336083.2417] manager: NetworkManager state is now CONNECTED_SITE
Oct 01 16:28:03 compute-0 NetworkManager[3952]: <info>  [1759336083.2477] exiting (success)
Oct 01 16:28:03 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 01 16:28:03 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 01 16:28:03 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 01 16:28:03 compute-0 systemd[1]: Stopped Network Manager.
Oct 01 16:28:03 compute-0 systemd[1]: NetworkManager.service: Consumed 10.241s CPU time, 4.1M memory peak, read 0B from disk, written 30.5K to disk.
Oct 01 16:28:03 compute-0 systemd[1]: Starting Network Manager...
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.3368] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:60f5f1a4-b8dd-4af4-b8bb-2f6fb4fb4541)
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.3370] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.3436] manager[0x5583f7763090]: monitoring kernel firmware directory '/lib/firmware'.
Oct 01 16:28:03 compute-0 systemd[1]: Starting Hostname Service...
Oct 01 16:28:03 compute-0 systemd[1]: Started Hostname Service.
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4530] hostname: hostname: using hostnamed
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4533] hostname: static hostname changed from (none) to "compute-0"
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4541] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4547] manager[0x5583f7763090]: rfkill: Wi-Fi hardware radio set enabled
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4548] manager[0x5583f7763090]: rfkill: WWAN hardware radio set enabled
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4584] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4600] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4601] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4602] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4603] manager: Networking is enabled by state file
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4606] settings: Loaded settings plugin: keyfile (internal)
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4613] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4656] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4670] dhcp: init: Using DHCP client 'internal'
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4674] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4682] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4691] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4703] device (lo): Activation: starting connection 'lo' (203f8aae-0043-4f00-be85-213b46013acc)
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4714] device (eth0): carrier: link connected
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4722] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4728] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4730] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4739] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4749] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4757] device (eth1): carrier: link connected
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4764] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4771] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (d4aeb451-37af-5c94-b881-be0e7e424ee8) (indicated)
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4772] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4779] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4789] device (eth1): Activation: starting connection 'ci-private-network' (d4aeb451-37af-5c94-b881-be0e7e424ee8)
Oct 01 16:28:03 compute-0 systemd[1]: Started Network Manager.
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4809] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4825] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4831] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4834] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4840] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4846] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4851] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4855] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4861] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4872] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4877] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4897] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4922] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4941] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4945] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 01 16:28:03 compute-0 systemd[1]: Starting Network Manager Wait Online...
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4955] device (lo): Activation: successful, device activated.
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4970] dhcp4 (eth0): state changed new lease, address=38.129.56.223
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.4988] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.5081] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.5088] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.5099] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.5104] manager: NetworkManager state is now CONNECTED_LOCAL
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.5110] device (eth1): Activation: successful, device activated.
Oct 01 16:28:03 compute-0 sudo[44912]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.5153] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.5157] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.5164] manager: NetworkManager state is now CONNECTED_SITE
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.5171] device (eth0): Activation: successful, device activated.
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.5180] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 01 16:28:03 compute-0 NetworkManager[44927]: <info>  [1759336083.5232] manager: startup complete
Oct 01 16:28:03 compute-0 systemd[1]: Finished Network Manager Wait Online.
Oct 01 16:28:04 compute-0 sudo[45139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slcpyawivfzlbpjbkgouqexhgetmqwst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336083.6976247-168-70424138409425/AnsiballZ_dnf.py'
Oct 01 16:28:04 compute-0 sudo[45139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:04 compute-0 python3.9[45141]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:28:09 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 01 16:28:09 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 01 16:28:09 compute-0 systemd[1]: Reloading.
Oct 01 16:28:09 compute-0 systemd-sysv-generator[45199]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:28:09 compute-0 systemd-rc-local-generator[45195]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:28:09 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 01 16:28:09 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 01 16:28:09 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 01 16:28:09 compute-0 systemd[1]: run-rf89dd76c2d0047b59a21ac18a1843d66.service: Deactivated successfully.
Oct 01 16:28:10 compute-0 sudo[45139]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:10 compute-0 sudo[45603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdomptcivxodkxcyjmnuafqgzohqbfvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336090.556764-180-251858576123804/AnsiballZ_stat.py'
Oct 01 16:28:10 compute-0 sudo[45603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:11 compute-0 python3.9[45605]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:28:11 compute-0 sudo[45603]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:11 compute-0 sudo[45755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfovsvljsxhkqcwulutzosclivyawnhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336091.3556592-189-95216815186620/AnsiballZ_ini_file.py'
Oct 01 16:28:11 compute-0 sudo[45755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:11 compute-0 python3.9[45757]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:28:12 compute-0 sudo[45755]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:12 compute-0 sudo[45909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwyaijttjvngwaalzthkqpesyvnkvhvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336092.2341542-199-114533805123915/AnsiballZ_ini_file.py'
Oct 01 16:28:12 compute-0 sudo[45909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:12 compute-0 python3.9[45911]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:28:12 compute-0 sudo[45909]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:13 compute-0 sudo[46061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axruxapeochmjkuajxlzmdemmuqhgfdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336092.860606-199-182145163231730/AnsiballZ_ini_file.py'
Oct 01 16:28:13 compute-0 sudo[46061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:13 compute-0 python3.9[46063]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:28:13 compute-0 sudo[46061]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:13 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 01 16:28:13 compute-0 sudo[46213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqtlvclepktkymryvnchxtezxvmrocoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336093.4855697-214-154669904287218/AnsiballZ_ini_file.py'
Oct 01 16:28:13 compute-0 sudo[46213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:13 compute-0 python3.9[46215]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:28:13 compute-0 sudo[46213]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:14 compute-0 sudo[46365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebyrpimgmzgvifturkptavavktbjcshu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336094.1090147-214-156914420397954/AnsiballZ_ini_file.py'
Oct 01 16:28:14 compute-0 sudo[46365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:14 compute-0 python3.9[46367]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:28:14 compute-0 sudo[46365]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:15 compute-0 sudo[46517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzwmyssoceqthraqfibxwwjeifqtkbfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336094.7810376-229-65869547876248/AnsiballZ_stat.py'
Oct 01 16:28:15 compute-0 sudo[46517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:15 compute-0 python3.9[46519]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:28:15 compute-0 sudo[46517]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:16 compute-0 sudo[46640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esygjrlihggoiwxppsvfumedccpwsees ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336094.7810376-229-65869547876248/AnsiballZ_copy.py'
Oct 01 16:28:16 compute-0 sudo[46640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:16 compute-0 python3.9[46642]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336094.7810376-229-65869547876248/.source _original_basename=.bvhr8bq4 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:28:16 compute-0 sudo[46640]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:16 compute-0 sudo[46792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvzfhzlqbvdtwjepegqlocsctujnczuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336096.5274842-244-270191929293641/AnsiballZ_file.py'
Oct 01 16:28:16 compute-0 sudo[46792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:17 compute-0 python3.9[46794]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:28:17 compute-0 sudo[46792]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:17 compute-0 sudo[46944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqbzrizymyiaarwvmghxvhtarbzdsbku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336097.276874-252-238238311373483/AnsiballZ_edpm_os_net_config_mappings.py'
Oct 01 16:28:17 compute-0 sudo[46944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:17 compute-0 python3.9[46946]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct 01 16:28:17 compute-0 sudo[46944]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:18 compute-0 sudo[47096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hleccyavwdaruvnhfimljtsylicidkud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336098.1950898-261-272361939841208/AnsiballZ_file.py'
Oct 01 16:28:18 compute-0 sudo[47096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:18 compute-0 python3.9[47098]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:28:18 compute-0 sudo[47096]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:19 compute-0 sudo[47248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waouajatibrtzwemwpyrzxiuybfxxrqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336098.9490814-271-121343282332992/AnsiballZ_stat.py'
Oct 01 16:28:19 compute-0 sudo[47248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:19 compute-0 sudo[47248]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:19 compute-0 sudo[47371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbewronaucyurzzolkmeeluawanbhmds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336098.9490814-271-121343282332992/AnsiballZ_copy.py'
Oct 01 16:28:19 compute-0 sudo[47371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:20 compute-0 sudo[47371]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:20 compute-0 sudo[47523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vifbgmdbniguqqtzkylrozokbunoamqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336100.2011225-286-23017245141752/AnsiballZ_slurp.py'
Oct 01 16:28:20 compute-0 sudo[47523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:20 compute-0 python3.9[47525]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct 01 16:28:20 compute-0 sudo[47523]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:21 compute-0 sudo[47698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqqnsmufzgtalodexeeyycocutcgcwut ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336101.1934528-295-27187316008585/async_wrapper.py j577884455559 300 /home/zuul/.ansible/tmp/ansible-tmp-1759336101.1934528-295-27187316008585/AnsiballZ_edpm_os_net_config.py _'
Oct 01 16:28:21 compute-0 sudo[47698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:22 compute-0 ansible-async_wrapper.py[47700]: Invoked with j577884455559 300 /home/zuul/.ansible/tmp/ansible-tmp-1759336101.1934528-295-27187316008585/AnsiballZ_edpm_os_net_config.py _
Oct 01 16:28:22 compute-0 ansible-async_wrapper.py[47703]: Starting module and watcher
Oct 01 16:28:22 compute-0 ansible-async_wrapper.py[47703]: Start watching 47704 (300)
Oct 01 16:28:22 compute-0 ansible-async_wrapper.py[47704]: Start module (47704)
Oct 01 16:28:22 compute-0 ansible-async_wrapper.py[47700]: Return async_wrapper task started.
Oct 01 16:28:22 compute-0 sudo[47698]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:22 compute-0 python3.9[47705]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct 01 16:28:23 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 01 16:28:23 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 01 16:28:23 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 01 16:28:23 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 01 16:28:23 compute-0 kernel: cfg80211: failed to load regulatory.db
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.2763] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.2783] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3406] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3408] audit: op="connection-add" uuid="a231817f-507b-46e6-8503-84a5384616f6" name="br-ex-br" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3421] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3423] audit: op="connection-add" uuid="d918bcb7-fa8b-4890-9285-1ed7e8844ed3" name="br-ex-port" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3433] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3434] audit: op="connection-add" uuid="0fc52ac9-9d9e-4e30-9fb1-1deb6a799a7b" name="eth1-port" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3446] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3447] audit: op="connection-add" uuid="d4516210-ff6d-4b30-9728-04ed240d7af1" name="vlan20-port" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3458] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3459] audit: op="connection-add" uuid="77465064-de7a-4c84-85bb-c6e158aadb9e" name="vlan21-port" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3469] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3470] audit: op="connection-add" uuid="127eb1b5-6992-492b-8b57-95ab9751e7a2" name="vlan22-port" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3480] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3481] audit: op="connection-add" uuid="c1bdbb92-0e72-40a7-93d4-34faeabd1cff" name="vlan23-port" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3503] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout,connection.timestamp,connection.autoconnect-priority" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3518] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3520] audit: op="connection-add" uuid="eef8d6bf-ac60-4bf7-8bf4-c6f43843e0d1" name="br-ex-if" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3544] audit: op="connection-update" uuid="d4aeb451-37af-5c94-b881-be0e7e424ee8" name="ci-private-network" args="ovs-external-ids.data,ipv4.never-default,ipv4.method,ipv4.routes,ipv4.routing-rules,ipv4.addresses,ipv4.dns,ipv6.method,ipv6.addr-gen-mode,ipv6.routes,ipv6.routing-rules,ipv6.addresses,ipv6.dns,connection.timestamp,connection.slave-type,connection.controller,connection.port-type,connection.master,ovs-interface.type" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3559] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3561] audit: op="connection-add" uuid="62a4c32c-0377-45d6-8a79-8c2701d0e43a" name="vlan20-if" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3574] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3575] audit: op="connection-add" uuid="4f574380-ddc9-4595-8d29-bda162374d14" name="vlan21-if" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3592] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3593] audit: op="connection-add" uuid="7041e92e-2dc9-4fb3-ad3d-d7b9c0c8dd5e" name="vlan22-if" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3609] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3610] audit: op="connection-add" uuid="3b2a2110-d2cd-4b06-814d-c79d1812e8ef" name="vlan23-if" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3622] audit: op="connection-delete" uuid="01b669a8-fa91-383c-b7dc-5c5d3c1764d8" name="Wired connection 1" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3633] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3643] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3646] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (a231817f-507b-46e6-8503-84a5384616f6)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3646] audit: op="connection-activate" uuid="a231817f-507b-46e6-8503-84a5384616f6" name="br-ex-br" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3648] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3654] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3656] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (d918bcb7-fa8b-4890-9285-1ed7e8844ed3)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3657] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3661] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3665] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (0fc52ac9-9d9e-4e30-9fb1-1deb6a799a7b)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3666] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3671] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3674] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (d4516210-ff6d-4b30-9728-04ed240d7af1)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3676] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3681] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3684] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (77465064-de7a-4c84-85bb-c6e158aadb9e)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3686] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3691] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3694] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (127eb1b5-6992-492b-8b57-95ab9751e7a2)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3695] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3699] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3702] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (c1bdbb92-0e72-40a7-93d4-34faeabd1cff)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3703] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3704] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3705] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3709] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3713] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3715] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (eef8d6bf-ac60-4bf7-8bf4-c6f43843e0d1)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3716] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3718] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3719] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3720] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3721] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3728] device (eth1): disconnecting for new activation request.
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3728] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3730] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3731] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3732] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3734] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3738] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3742] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (62a4c32c-0377-45d6-8a79-8c2701d0e43a)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3742] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3745] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3746] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3747] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3750] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3754] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3758] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (4f574380-ddc9-4595-8d29-bda162374d14)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3759] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3762] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3764] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3765] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3767] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3772] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3776] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (7041e92e-2dc9-4fb3-ad3d-d7b9c0c8dd5e)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3777] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3779] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3781] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3782] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3784] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3788] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3792] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (3b2a2110-d2cd-4b06-814d-c79d1812e8ef)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3793] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3795] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3797] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3798] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3800] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3810] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,connection.autoconnect-priority" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3812] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3815] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3817] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3829] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3832] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3835] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3838] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 kernel: ovs-system: entered promiscuous mode
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3840] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3844] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3848] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3851] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3853] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3857] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3860] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 kernel: Timeout policy base is empty
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3863] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3865] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3869] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 systemd-udevd[47712]: Network interface NamePolicy= disabled on kernel command line.
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3873] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3876] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3877] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3881] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3884] dhcp4 (eth0): canceled DHCP transaction
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3884] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3885] dhcp4 (eth0): state changed no lease
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3886] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 01 16:28:24 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3921] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3939] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47706 uid=0 result="fail" reason="Device is not activated"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3948] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3966] device (eth1): disconnecting for new activation request.
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.3967] audit: op="connection-activate" uuid="d4aeb451-37af-5c94-b881-be0e7e424ee8" name="ci-private-network" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4018] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4036] dhcp4 (eth0): state changed new lease, address=38.129.56.223
Oct 01 16:28:24 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4054] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4105] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47706 uid=0 result="success"
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4109] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4116] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4228] device (eth1): Activation: starting connection 'ci-private-network' (d4aeb451-37af-5c94-b881-be0e7e424ee8)
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4232] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4234] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4236] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4237] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4239] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4240] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4241] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 kernel: br-ex: entered promiscuous mode
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4250] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4255] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4262] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4266] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4271] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4275] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4279] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4283] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4287] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4291] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4296] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4300] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4304] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4309] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4313] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4317] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct 01 16:28:24 compute-0 kernel: vlan22: entered promiscuous mode
Oct 01 16:28:24 compute-0 systemd-udevd[47710]: Network interface NamePolicy= disabled on kernel command line.
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4350] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4365] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4369] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 kernel: vlan20: entered promiscuous mode
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4446] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4461] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 kernel: vlan23: entered promiscuous mode
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4472] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4479] device (eth1): Activation: successful, device activated.
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4491] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4516] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 kernel: vlan21: entered promiscuous mode
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4560] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4576] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 01 16:28:24 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4582] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 systemd-udevd[47821]: Network interface NamePolicy= disabled on kernel command line.
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4600] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4606] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4616] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4616] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4625] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4631] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4658] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4685] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4695] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4703] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4712] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4720] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4725] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4732] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4748] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4765] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4804] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4807] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 16:28:24 compute-0 NetworkManager[44927]: <info>  [1759336104.4816] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 01 16:28:25 compute-0 NetworkManager[44927]: <info>  [1759336105.6160] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47706 uid=0 result="success"
Oct 01 16:28:25 compute-0 NetworkManager[44927]: <info>  [1759336105.8389] checkpoint[0x5583f7739950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct 01 16:28:25 compute-0 NetworkManager[44927]: <info>  [1759336105.8396] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47706 uid=0 result="success"
Oct 01 16:28:25 compute-0 sudo[48062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fubssmhyzumzzboqirqpvfeuembimjaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336105.3144703-295-198705104846559/AnsiballZ_async_status.py'
Oct 01 16:28:25 compute-0 sudo[48062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:26 compute-0 python3.9[48065]: ansible-ansible.legacy.async_status Invoked with jid=j577884455559.47700 mode=status _async_dir=/root/.ansible_async
Oct 01 16:28:26 compute-0 sudo[48062]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:26 compute-0 NetworkManager[44927]: <info>  [1759336106.1925] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47706 uid=0 result="success"
Oct 01 16:28:26 compute-0 NetworkManager[44927]: <info>  [1759336106.1937] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47706 uid=0 result="success"
Oct 01 16:28:26 compute-0 NetworkManager[44927]: <info>  [1759336106.4190] audit: op="networking-control" arg="global-dns-configuration" pid=47706 uid=0 result="success"
Oct 01 16:28:26 compute-0 NetworkManager[44927]: <info>  [1759336106.4218] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct 01 16:28:26 compute-0 NetworkManager[44927]: <info>  [1759336106.4249] audit: op="networking-control" arg="global-dns-configuration" pid=47706 uid=0 result="success"
Oct 01 16:28:26 compute-0 NetworkManager[44927]: <info>  [1759336106.4281] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47706 uid=0 result="success"
Oct 01 16:28:26 compute-0 NetworkManager[44927]: <info>  [1759336106.6162] checkpoint[0x5583f7739a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct 01 16:28:26 compute-0 NetworkManager[44927]: <info>  [1759336106.6171] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47706 uid=0 result="success"
Oct 01 16:28:26 compute-0 ansible-async_wrapper.py[47704]: Module complete (47704)
Oct 01 16:28:27 compute-0 ansible-async_wrapper.py[47703]: Done in kid B.
Oct 01 16:28:29 compute-0 sudo[48168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyelrmlpcdfcrkceucydweappkzmeswl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336105.3144703-295-198705104846559/AnsiballZ_async_status.py'
Oct 01 16:28:29 compute-0 sudo[48168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:29 compute-0 python3.9[48170]: ansible-ansible.legacy.async_status Invoked with jid=j577884455559.47700 mode=status _async_dir=/root/.ansible_async
Oct 01 16:28:29 compute-0 sudo[48168]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:29 compute-0 sudo[48268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjmuxtzxmdaheyrrffkqedxbrsgrogmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336105.3144703-295-198705104846559/AnsiballZ_async_status.py'
Oct 01 16:28:29 compute-0 sudo[48268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:30 compute-0 python3.9[48270]: ansible-ansible.legacy.async_status Invoked with jid=j577884455559.47700 mode=cleanup _async_dir=/root/.ansible_async
Oct 01 16:28:30 compute-0 sudo[48268]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:30 compute-0 sudo[48420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcjkwyexophmqqffbgdorhiolzbolvci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336110.3686192-322-12467244241793/AnsiballZ_stat.py'
Oct 01 16:28:30 compute-0 sudo[48420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:30 compute-0 python3.9[48422]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:28:30 compute-0 sudo[48420]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:31 compute-0 sudo[48543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmjevuhpmqfngdqpblaqempmcdfynaje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336110.3686192-322-12467244241793/AnsiballZ_copy.py'
Oct 01 16:28:31 compute-0 sudo[48543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:31 compute-0 python3.9[48545]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336110.3686192-322-12467244241793/.source.returncode _original_basename=.lxxvxuy4 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:28:31 compute-0 sudo[48543]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:32 compute-0 sudo[48695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gajxdtsrrbpjrhaxqgbhmqqrfsumhakv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336111.7527614-338-24937421622233/AnsiballZ_stat.py'
Oct 01 16:28:32 compute-0 sudo[48695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:32 compute-0 python3.9[48697]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:28:32 compute-0 sudo[48695]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:32 compute-0 sudo[48818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cahwqvxylocjbpnleusybfwryucxhsav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336111.7527614-338-24937421622233/AnsiballZ_copy.py'
Oct 01 16:28:32 compute-0 sudo[48818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:32 compute-0 python3.9[48820]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336111.7527614-338-24937421622233/.source.cfg _original_basename=.l4m0t5wb follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:28:32 compute-0 sudo[48818]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:33 compute-0 sudo[48971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkprfhqushnropemdlllkovksezehpfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336112.902245-353-187341382250898/AnsiballZ_systemd.py'
Oct 01 16:28:33 compute-0 sudo[48971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:33 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 01 16:28:33 compute-0 python3.9[48973]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 16:28:33 compute-0 systemd[1]: Reloading Network Manager...
Oct 01 16:28:33 compute-0 NetworkManager[44927]: <info>  [1759336113.6577] audit: op="reload" arg="0" pid=48979 uid=0 result="success"
Oct 01 16:28:33 compute-0 NetworkManager[44927]: <info>  [1759336113.6586] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct 01 16:28:33 compute-0 systemd[1]: Reloaded Network Manager.
Oct 01 16:28:33 compute-0 sudo[48971]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:34 compute-0 sshd-session[40923]: Connection closed by 192.168.122.30 port 49182
Oct 01 16:28:34 compute-0 sshd-session[40920]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:28:34 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Oct 01 16:28:34 compute-0 systemd[1]: session-9.scope: Consumed 53.574s CPU time.
Oct 01 16:28:34 compute-0 systemd-logind[788]: Session 9 logged out. Waiting for processes to exit.
Oct 01 16:28:34 compute-0 systemd-logind[788]: Removed session 9.
Oct 01 16:28:40 compute-0 sshd-session[49010]: Accepted publickey for zuul from 192.168.122.30 port 41090 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:28:40 compute-0 systemd-logind[788]: New session 10 of user zuul.
Oct 01 16:28:40 compute-0 systemd[1]: Started Session 10 of User zuul.
Oct 01 16:28:40 compute-0 sshd-session[49010]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:28:41 compute-0 python3.9[49163]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:28:42 compute-0 python3.9[49317]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 16:28:43 compute-0 python3.9[49511]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:28:43 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 01 16:28:44 compute-0 sshd-session[49013]: Connection closed by 192.168.122.30 port 41090
Oct 01 16:28:44 compute-0 sshd-session[49010]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:28:44 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Oct 01 16:28:44 compute-0 systemd[1]: session-10.scope: Consumed 2.608s CPU time.
Oct 01 16:28:44 compute-0 systemd-logind[788]: Session 10 logged out. Waiting for processes to exit.
Oct 01 16:28:44 compute-0 systemd-logind[788]: Removed session 10.
Oct 01 16:28:51 compute-0 sshd-session[49540]: Accepted publickey for zuul from 192.168.122.30 port 59520 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:28:51 compute-0 systemd-logind[788]: New session 11 of user zuul.
Oct 01 16:28:51 compute-0 systemd[1]: Started Session 11 of User zuul.
Oct 01 16:28:51 compute-0 sshd-session[49540]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:28:52 compute-0 python3.9[49693]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:28:53 compute-0 python3.9[49848]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:28:53 compute-0 sudo[50002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jamcotkdezkuzcyobzqpejilftlziabs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336133.6922083-40-23021565130711/AnsiballZ_setup.py'
Oct 01 16:28:53 compute-0 sudo[50002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:54 compute-0 python3.9[50004]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 16:28:54 compute-0 sudo[50002]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:55 compute-0 sudo[50087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqnluoaofdeqpmbugyipxadtqaxcdrqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336133.6922083-40-23021565130711/AnsiballZ_dnf.py'
Oct 01 16:28:55 compute-0 sudo[50087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:55 compute-0 python3.9[50089]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:28:56 compute-0 sudo[50087]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:57 compute-0 sudo[50240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfetbqzuwdzkepitgpkgnqctekyshaqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336136.792608-52-273248043601189/AnsiballZ_setup.py'
Oct 01 16:28:57 compute-0 sudo[50240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:57 compute-0 python3.9[50242]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 16:28:57 compute-0 sudo[50240]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:58 compute-0 sudo[50435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyuyzzmpsbnaduqtfzquktkihiobajzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336138.0825977-63-235126210802525/AnsiballZ_file.py'
Oct 01 16:28:58 compute-0 sudo[50435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:58 compute-0 python3.9[50437]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:28:58 compute-0 sudo[50435]: pam_unix(sudo:session): session closed for user root
Oct 01 16:28:59 compute-0 sudo[50587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-criiqrybjibqdcswlhuzxvwaidyjqbnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336138.9706647-71-173629246616647/AnsiballZ_command.py'
Oct 01 16:28:59 compute-0 sudo[50587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:28:59 compute-0 python3.9[50589]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:28:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3422736952-merged.mount: Deactivated successfully.
Oct 01 16:28:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck1950332012-merged.mount: Deactivated successfully.
Oct 01 16:28:59 compute-0 podman[50590]: 2025-10-01 16:28:59.700498195 +0000 UTC m=+0.070711251 system refresh
Oct 01 16:28:59 compute-0 sudo[50587]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:00 compute-0 sudo[50750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhembffkwizyilbxcaujdujomautmziq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336139.88184-79-3789581745395/AnsiballZ_stat.py'
Oct 01 16:29:00 compute-0 sudo[50750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:00 compute-0 python3.9[50752]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:29:00 compute-0 sudo[50750]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:00 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 16:29:01 compute-0 sudo[50873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kamyvrcrpvexllwxiccsihlqvimmngmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336139.88184-79-3789581745395/AnsiballZ_copy.py'
Oct 01 16:29:01 compute-0 sudo[50873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:01 compute-0 python3.9[50875]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336139.88184-79-3789581745395/.source.json follow=False _original_basename=podman_network_config.j2 checksum=3079a051084364b77582b918e5ec6e0b2258f013 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:29:01 compute-0 sudo[50873]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:01 compute-0 anacron[4037]: Job `cron.daily' started
Oct 01 16:29:01 compute-0 anacron[4037]: Job `cron.daily' terminated
Oct 01 16:29:01 compute-0 sudo[51027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oalkjjyyngeqcbkmduiqhdubxkpvnjdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336141.63062-94-205297537800137/AnsiballZ_stat.py'
Oct 01 16:29:01 compute-0 sudo[51027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:02 compute-0 python3.9[51029]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:29:02 compute-0 sudo[51027]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:02 compute-0 sudo[51150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drqwryflupvewwngeugsbgzehvajeiha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336141.63062-94-205297537800137/AnsiballZ_copy.py'
Oct 01 16:29:02 compute-0 sudo[51150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:02 compute-0 python3.9[51152]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759336141.63062-94-205297537800137/.source.conf follow=False _original_basename=registries.conf.j2 checksum=72ddb04219a06ecdf6a11ec2551ff8d0679beaac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:29:02 compute-0 sudo[51150]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:03 compute-0 sudo[51302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wztbmqkaextdgsbewgxzgugotbaewaye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336143.0005767-110-166639203208222/AnsiballZ_ini_file.py'
Oct 01 16:29:03 compute-0 sudo[51302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:03 compute-0 python3.9[51304]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:29:03 compute-0 sudo[51302]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:04 compute-0 sudo[51454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lroaalvidmfawshlmmjzzfbkiruvltjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336143.8698795-110-87835190643183/AnsiballZ_ini_file.py'
Oct 01 16:29:04 compute-0 sudo[51454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:04 compute-0 python3.9[51456]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:29:04 compute-0 sudo[51454]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:04 compute-0 sudo[51606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjtrvpearorvkwfedelhdbhyozoukgly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336144.5231476-110-97901549707946/AnsiballZ_ini_file.py'
Oct 01 16:29:04 compute-0 sudo[51606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:05 compute-0 python3.9[51608]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:29:05 compute-0 sudo[51606]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:05 compute-0 sudo[51758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmkxeekhwukjrabsuiokehvkcioyeojz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336145.2811215-110-100417261988652/AnsiballZ_ini_file.py'
Oct 01 16:29:05 compute-0 sudo[51758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:05 compute-0 python3.9[51760]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:29:05 compute-0 sudo[51758]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:06 compute-0 sudo[51910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zodhourlpqanfmyjoatdqhryhngaeeca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336146.222259-141-262671478360894/AnsiballZ_dnf.py'
Oct 01 16:29:06 compute-0 sudo[51910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:06 compute-0 python3.9[51912]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:29:08 compute-0 sudo[51910]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:08 compute-0 sudo[52063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbyytegiempujzapdlwqkwiwjzqmxyyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336148.4695072-152-65370511836972/AnsiballZ_setup.py'
Oct 01 16:29:08 compute-0 sudo[52063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:09 compute-0 python3.9[52065]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:29:09 compute-0 sudo[52063]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:09 compute-0 sudo[52217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdbpyyzlmnsasvyctocpenhfoknvhprr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336149.404687-160-229304250642269/AnsiballZ_stat.py'
Oct 01 16:29:09 compute-0 sudo[52217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:09 compute-0 python3.9[52219]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:29:09 compute-0 sudo[52217]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:10 compute-0 sudo[52369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymgitgjdbnwxiqyguktkzpnyypdwmlfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336150.1743052-169-132390846492415/AnsiballZ_stat.py'
Oct 01 16:29:10 compute-0 sudo[52369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:10 compute-0 python3.9[52371]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:29:10 compute-0 sudo[52369]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:11 compute-0 sudo[52521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxuinmnvthddlexlueeyrcdzdftfrvck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336151.0045736-179-95904217543926/AnsiballZ_service_facts.py'
Oct 01 16:29:11 compute-0 sudo[52521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:11 compute-0 python3.9[52523]: ansible-service_facts Invoked
Oct 01 16:29:11 compute-0 network[52540]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 01 16:29:11 compute-0 network[52541]: 'network-scripts' will be removed from distribution in near future.
Oct 01 16:29:11 compute-0 network[52542]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 01 16:29:15 compute-0 sudo[52521]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:16 compute-0 sudo[52827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiwfaepyezhuugxtszmtwjfwzautnegb ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1759336156.1748471-192-168793114218496/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1759336156.1748471-192-168793114218496/args'
Oct 01 16:29:16 compute-0 sudo[52827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:16 compute-0 sudo[52827]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:17 compute-0 sudo[52994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtquygufzikfumqshcrfjtmdqslcmdbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336156.994377-203-14209451045844/AnsiballZ_dnf.py'
Oct 01 16:29:17 compute-0 sudo[52994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:17 compute-0 python3.9[52996]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:29:18 compute-0 sudo[52994]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:19 compute-0 sudo[53147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehfjzscmafvanvjgcqzhcunybpadloqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336159.1191823-216-177005649570995/AnsiballZ_package_facts.py'
Oct 01 16:29:19 compute-0 sudo[53147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:19 compute-0 python3.9[53149]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 01 16:29:20 compute-0 sudo[53147]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:20 compute-0 sudo[53299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhbarigoqhvprquwdesqflfomqmqtnct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336160.6205974-226-177252151579403/AnsiballZ_stat.py'
Oct 01 16:29:20 compute-0 sudo[53299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:21 compute-0 python3.9[53301]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:29:21 compute-0 sudo[53299]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:21 compute-0 sudo[53424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieymwgvcscoevucrdbwnhdxoyjtvheam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336160.6205974-226-177252151579403/AnsiballZ_copy.py'
Oct 01 16:29:21 compute-0 sudo[53424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:21 compute-0 python3.9[53426]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336160.6205974-226-177252151579403/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:29:21 compute-0 sudo[53424]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:22 compute-0 sudo[53578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quaomlxtnppjwjuwbkgtdkronxqehauz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336161.9678547-241-228634388923794/AnsiballZ_stat.py'
Oct 01 16:29:22 compute-0 sudo[53578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:22 compute-0 python3.9[53580]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:29:22 compute-0 sudo[53578]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:22 compute-0 sudo[53703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-admjbivxukpljofrbstjvxasaryvriwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336161.9678547-241-228634388923794/AnsiballZ_copy.py'
Oct 01 16:29:22 compute-0 sudo[53703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:23 compute-0 python3.9[53705]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336161.9678547-241-228634388923794/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:29:23 compute-0 sudo[53703]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:24 compute-0 sudo[53857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbsvbuvwqdyrvbokwlnzhdxqxgdkviio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336163.7382479-262-42038395982016/AnsiballZ_lineinfile.py'
Oct 01 16:29:24 compute-0 sudo[53857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:24 compute-0 python3.9[53859]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:29:24 compute-0 sudo[53857]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:25 compute-0 sudo[54011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wojzfklmtlrkxtrhilzjufvuwrayzqza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336165.0307965-277-227965514788227/AnsiballZ_setup.py'
Oct 01 16:29:25 compute-0 sudo[54011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:25 compute-0 python3.9[54013]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 16:29:25 compute-0 sudo[54011]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:26 compute-0 sudo[54095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgttpclagkznwpseixefnpaiaphbwsgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336165.0307965-277-227965514788227/AnsiballZ_systemd.py'
Oct 01 16:29:26 compute-0 sudo[54095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:26 compute-0 python3.9[54097]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:29:26 compute-0 sudo[54095]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:27 compute-0 sudo[54249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kibmmihxkrntnfzoahpytxovzzyhyvrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336167.4092085-293-120858920147251/AnsiballZ_setup.py'
Oct 01 16:29:27 compute-0 sudo[54249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:28 compute-0 python3.9[54251]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 16:29:28 compute-0 sudo[54249]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:28 compute-0 sudo[54333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abybzlywczmbjowlcymerhkawqxiwzpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336167.4092085-293-120858920147251/AnsiballZ_systemd.py'
Oct 01 16:29:28 compute-0 sudo[54333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:28 compute-0 python3.9[54335]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 16:29:29 compute-0 chronyd[803]: chronyd exiting
Oct 01 16:29:29 compute-0 systemd[1]: Stopping NTP client/server...
Oct 01 16:29:29 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Oct 01 16:29:29 compute-0 systemd[1]: Stopped NTP client/server.
Oct 01 16:29:29 compute-0 systemd[1]: Starting NTP client/server...
Oct 01 16:29:29 compute-0 chronyd[54344]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 01 16:29:29 compute-0 chronyd[54344]: Frequency -25.579 +/- 0.580 ppm read from /var/lib/chrony/drift
Oct 01 16:29:29 compute-0 chronyd[54344]: Loaded seccomp filter (level 2)
Oct 01 16:29:29 compute-0 systemd[1]: Started NTP client/server.
Oct 01 16:29:29 compute-0 sudo[54333]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:29 compute-0 sshd-session[49543]: Connection closed by 192.168.122.30 port 59520
Oct 01 16:29:29 compute-0 sshd-session[49540]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:29:29 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Oct 01 16:29:29 compute-0 systemd[1]: session-11.scope: Consumed 26.956s CPU time.
Oct 01 16:29:29 compute-0 systemd-logind[788]: Session 11 logged out. Waiting for processes to exit.
Oct 01 16:29:29 compute-0 systemd-logind[788]: Removed session 11.
Oct 01 16:29:35 compute-0 sshd-session[54370]: Accepted publickey for zuul from 192.168.122.30 port 59582 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:29:35 compute-0 systemd-logind[788]: New session 12 of user zuul.
Oct 01 16:29:35 compute-0 systemd[1]: Started Session 12 of User zuul.
Oct 01 16:29:35 compute-0 sshd-session[54370]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:29:36 compute-0 sudo[54523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gysndyaalculnfhtwfgsnksceyhqhcbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336175.7532337-22-64573707534004/AnsiballZ_file.py'
Oct 01 16:29:36 compute-0 sudo[54523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:36 compute-0 python3.9[54525]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:29:36 compute-0 sudo[54523]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:37 compute-0 sudo[54675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyowsgrmucrcnytzoedhdupzfwukkgyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336176.7799003-34-275696764054845/AnsiballZ_stat.py'
Oct 01 16:29:37 compute-0 sudo[54675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:37 compute-0 python3.9[54677]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:29:37 compute-0 sudo[54675]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:38 compute-0 sudo[54798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptgolaagajtaxkvddlsoxfjsfzimaclg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336176.7799003-34-275696764054845/AnsiballZ_copy.py'
Oct 01 16:29:38 compute-0 sudo[54798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:38 compute-0 python3.9[54800]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336176.7799003-34-275696764054845/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:29:38 compute-0 sudo[54798]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:38 compute-0 sshd-session[54373]: Connection closed by 192.168.122.30 port 59582
Oct 01 16:29:38 compute-0 sshd-session[54370]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:29:38 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Oct 01 16:29:38 compute-0 systemd[1]: session-12.scope: Consumed 1.698s CPU time.
Oct 01 16:29:38 compute-0 systemd-logind[788]: Session 12 logged out. Waiting for processes to exit.
Oct 01 16:29:38 compute-0 systemd-logind[788]: Removed session 12.
Oct 01 16:29:44 compute-0 sshd-session[54825]: Accepted publickey for zuul from 192.168.122.30 port 53114 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:29:44 compute-0 systemd-logind[788]: New session 13 of user zuul.
Oct 01 16:29:44 compute-0 systemd[1]: Started Session 13 of User zuul.
Oct 01 16:29:44 compute-0 sshd-session[54825]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:29:45 compute-0 python3.9[54978]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:29:46 compute-0 sudo[55132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnywuvjgrdahurapdaqiwuvfelzvmouz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336186.3168414-33-120065927735809/AnsiballZ_file.py'
Oct 01 16:29:46 compute-0 sudo[55132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:46 compute-0 python3.9[55134]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:29:47 compute-0 sudo[55132]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:47 compute-0 sudo[55307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiidgugneehtdyxyjqtznxmhvwqyysiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336187.1895618-41-279505704340139/AnsiballZ_stat.py'
Oct 01 16:29:47 compute-0 sudo[55307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:47 compute-0 python3.9[55309]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:29:47 compute-0 sudo[55307]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:48 compute-0 sudo[55430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hffigixirikrsenbgxnfivhxpdgyyoyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336187.1895618-41-279505704340139/AnsiballZ_copy.py'
Oct 01 16:29:48 compute-0 sudo[55430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:48 compute-0 python3.9[55432]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1759336187.1895618-41-279505704340139/.source.json _original_basename=.kttg8dia follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:29:48 compute-0 sudo[55430]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:49 compute-0 sudo[55582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leocuauuhgfroibqrcwuqnsrlcpkvkus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336189.1013412-64-196135514017979/AnsiballZ_stat.py'
Oct 01 16:29:49 compute-0 sudo[55582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:49 compute-0 python3.9[55584]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:29:49 compute-0 sudo[55582]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:49 compute-0 sudo[55705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkveesvpgeulqkjkphbbwnxtdindlwed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336189.1013412-64-196135514017979/AnsiballZ_copy.py'
Oct 01 16:29:49 compute-0 sudo[55705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:50 compute-0 python3.9[55707]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336189.1013412-64-196135514017979/.source _original_basename=.zj0rtpjt follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:29:50 compute-0 sudo[55705]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:50 compute-0 sudo[55857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvikrfntnoilnfcgmguuvotmidmglbrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336190.4016213-80-222691202860594/AnsiballZ_file.py'
Oct 01 16:29:50 compute-0 sudo[55857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:50 compute-0 python3.9[55859]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:29:50 compute-0 sudo[55857]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:51 compute-0 sudo[56009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avoztscqtpmzeitygbebfgtmfwrqpvnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336191.0534282-88-38788708078945/AnsiballZ_stat.py'
Oct 01 16:29:51 compute-0 sudo[56009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:51 compute-0 python3.9[56011]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:29:51 compute-0 sudo[56009]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:51 compute-0 sudo[56132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmosrlpeglkucfyqiejphaqfqgjtqxdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336191.0534282-88-38788708078945/AnsiballZ_copy.py'
Oct 01 16:29:51 compute-0 sudo[56132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:52 compute-0 python3.9[56134]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759336191.0534282-88-38788708078945/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:29:52 compute-0 sudo[56132]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:52 compute-0 sudo[56284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgxmheboqirpuwxiavvoomdgggjmyzoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336192.3656147-88-136356901700914/AnsiballZ_stat.py'
Oct 01 16:29:52 compute-0 sudo[56284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:52 compute-0 python3.9[56286]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:29:52 compute-0 sudo[56284]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:53 compute-0 sudo[56407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oidllxytuflmssrnxmgomxwqkroxnzeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336192.3656147-88-136356901700914/AnsiballZ_copy.py'
Oct 01 16:29:53 compute-0 sudo[56407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:53 compute-0 python3.9[56409]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759336192.3656147-88-136356901700914/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:29:53 compute-0 sudo[56407]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:53 compute-0 sudo[56559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzzymmginuugnptcdifwqmwvrpsvrnim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336193.548138-117-21853720088865/AnsiballZ_file.py'
Oct 01 16:29:53 compute-0 sudo[56559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:54 compute-0 python3.9[56561]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:29:54 compute-0 sudo[56559]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:54 compute-0 sudo[56711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtowgqbsxbxwmmhgpamaritcxnrsnozj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336194.3372304-125-95842678999714/AnsiballZ_stat.py'
Oct 01 16:29:54 compute-0 sudo[56711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:54 compute-0 python3.9[56713]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:29:54 compute-0 sudo[56711]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:55 compute-0 sudo[56834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xheicnnybwpcsscamfertbgaplpgjuqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336194.3372304-125-95842678999714/AnsiballZ_copy.py'
Oct 01 16:29:55 compute-0 sudo[56834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:55 compute-0 python3.9[56836]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336194.3372304-125-95842678999714/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:29:55 compute-0 sudo[56834]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:55 compute-0 sudo[56986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yalxoarpibhqccrsgxafhpinmykmurxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336195.5813835-140-262258561621830/AnsiballZ_stat.py'
Oct 01 16:29:55 compute-0 sudo[56986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:56 compute-0 python3.9[56988]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:29:56 compute-0 sudo[56986]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:56 compute-0 sudo[57109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlfsskuzvafttinsmxinonewtyohqkcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336195.5813835-140-262258561621830/AnsiballZ_copy.py'
Oct 01 16:29:56 compute-0 sudo[57109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:56 compute-0 python3.9[57111]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336195.5813835-140-262258561621830/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:29:56 compute-0 sudo[57109]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:57 compute-0 sudo[57261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeksceaiababonsmwztgesutgsqltntv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336196.995175-155-204244369119698/AnsiballZ_systemd.py'
Oct 01 16:29:57 compute-0 sudo[57261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:57 compute-0 python3.9[57263]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:29:57 compute-0 systemd[1]: Reloading.
Oct 01 16:29:58 compute-0 systemd-rc-local-generator[57290]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:29:58 compute-0 systemd-sysv-generator[57294]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:29:58 compute-0 systemd[1]: Reloading.
Oct 01 16:29:58 compute-0 systemd-rc-local-generator[57328]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:29:58 compute-0 systemd-sysv-generator[57332]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:29:58 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Oct 01 16:29:58 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Oct 01 16:29:58 compute-0 sudo[57261]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:59 compute-0 sudo[57488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kugylkkummbalsvjpbkjxnfsgtotqwce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336198.71004-163-126446329013302/AnsiballZ_stat.py'
Oct 01 16:29:59 compute-0 sudo[57488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:59 compute-0 python3.9[57490]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:29:59 compute-0 sudo[57488]: pam_unix(sudo:session): session closed for user root
Oct 01 16:29:59 compute-0 sudo[57611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aegfpoxqmfyvxbyxhmvnxpvhclmgbimi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336198.71004-163-126446329013302/AnsiballZ_copy.py'
Oct 01 16:29:59 compute-0 sudo[57611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:29:59 compute-0 python3.9[57613]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336198.71004-163-126446329013302/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:29:59 compute-0 sudo[57611]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:00 compute-0 sudo[57763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhlemkwewzpagoesljvcjadpkjxiacxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336199.9225664-178-169445943226269/AnsiballZ_stat.py'
Oct 01 16:30:00 compute-0 sudo[57763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:30:00 compute-0 python3.9[57765]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:30:00 compute-0 sudo[57763]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:00 compute-0 sudo[57886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfkfcvcqzlwptkqfnuschxxppdeutlvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336199.9225664-178-169445943226269/AnsiballZ_copy.py'
Oct 01 16:30:00 compute-0 sudo[57886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:30:01 compute-0 python3.9[57888]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336199.9225664-178-169445943226269/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:30:01 compute-0 sudo[57886]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:01 compute-0 sudo[58038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itedkfruzzlkuysopjywsmslxkywujgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336201.222301-193-150007291698861/AnsiballZ_systemd.py'
Oct 01 16:30:01 compute-0 sudo[58038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:30:01 compute-0 python3.9[58040]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:30:01 compute-0 systemd[1]: Reloading.
Oct 01 16:30:01 compute-0 systemd-rc-local-generator[58067]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:30:01 compute-0 systemd-sysv-generator[58072]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:30:02 compute-0 systemd[1]: Reloading.
Oct 01 16:30:02 compute-0 systemd-sysv-generator[58108]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:30:02 compute-0 systemd-rc-local-generator[58104]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:30:02 compute-0 systemd[1]: Starting Create netns directory...
Oct 01 16:30:02 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 01 16:30:02 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 01 16:30:02 compute-0 systemd[1]: Finished Create netns directory.
Oct 01 16:30:02 compute-0 sudo[58038]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:03 compute-0 python3.9[58266]: ansible-ansible.builtin.service_facts Invoked
Oct 01 16:30:03 compute-0 network[58283]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 01 16:30:03 compute-0 network[58284]: 'network-scripts' will be removed from distribution in near future.
Oct 01 16:30:03 compute-0 network[58285]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 01 16:30:06 compute-0 sudo[58547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaikmhqqolggdkuuazvtimzdogofcbjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336205.9403138-209-128641237060827/AnsiballZ_systemd.py'
Oct 01 16:30:06 compute-0 sudo[58547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:30:06 compute-0 python3.9[58549]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:30:06 compute-0 systemd[1]: Reloading.
Oct 01 16:30:06 compute-0 systemd-rc-local-generator[58577]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:30:06 compute-0 systemd-sysv-generator[58580]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:30:06 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Oct 01 16:30:07 compute-0 iptables.init[58588]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct 01 16:30:07 compute-0 iptables.init[58588]: iptables: Flushing firewall rules: [  OK  ]
Oct 01 16:30:07 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Oct 01 16:30:07 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Oct 01 16:30:07 compute-0 sudo[58547]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:07 compute-0 sudo[58782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nizljzwrtexsmnsvimoqaqlpihtnahza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336207.3861015-209-7626817388100/AnsiballZ_systemd.py'
Oct 01 16:30:07 compute-0 sudo[58782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:30:08 compute-0 python3.9[58784]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:30:08 compute-0 sudo[58782]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:08 compute-0 sudo[58936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vofepsqjhujhfwrbsjszpayhppqcpwdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336208.2653198-225-177954009972977/AnsiballZ_systemd.py'
Oct 01 16:30:08 compute-0 sudo[58936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:30:08 compute-0 python3.9[58938]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:30:08 compute-0 systemd[1]: Reloading.
Oct 01 16:30:08 compute-0 systemd-rc-local-generator[58967]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:30:08 compute-0 systemd-sysv-generator[58970]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:30:09 compute-0 systemd[1]: Starting Netfilter Tables...
Oct 01 16:30:09 compute-0 systemd[1]: Finished Netfilter Tables.
Oct 01 16:30:09 compute-0 sudo[58936]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:09 compute-0 sudo[59127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfymrrcloykdiblurzbxbjzlsydmynni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336209.3445873-233-152117056217962/AnsiballZ_command.py'
Oct 01 16:30:09 compute-0 sudo[59127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:30:10 compute-0 python3.9[59129]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:30:10 compute-0 sudo[59127]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:10 compute-0 sudo[59280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lipcqgfihytrrnpkfruqkenbmkibqavf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336210.5288112-247-30720485489192/AnsiballZ_stat.py'
Oct 01 16:30:10 compute-0 sudo[59280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:30:11 compute-0 python3.9[59282]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:30:11 compute-0 sudo[59280]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:11 compute-0 sudo[59405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkfwqxrillszeogdzlipflxblxsxaofr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336210.5288112-247-30720485489192/AnsiballZ_copy.py'
Oct 01 16:30:11 compute-0 sudo[59405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:30:11 compute-0 python3.9[59407]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336210.5288112-247-30720485489192/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:30:11 compute-0 sudo[59405]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:12 compute-0 python3.9[59558]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 16:30:12 compute-0 polkitd[6497]: Registered Authentication Agent for unix-process:59560:192717 (system bus name :1.522 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 01 16:30:37 compute-0 polkitd[6497]: Unregistered Authentication Agent for unix-process:59560:192717 (system bus name :1.522, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 01 16:30:37 compute-0 polkit-agent-helper-1[59572]: pam_unix(polkit-1:auth): conversation failed
Oct 01 16:30:37 compute-0 polkit-agent-helper-1[59572]: pam_unix(polkit-1:auth): auth could not identify password for [root]
Oct 01 16:30:37 compute-0 polkitd[6497]: Operator of unix-process:59560:192717 FAILED to authenticate to gain authorization for action org.freedesktop.systemd1.manage-units for system-bus-name::1.521 [<unknown>] (owned by unix-user:zuul)
Oct 01 16:30:37 compute-0 sshd-session[54828]: Connection closed by 192.168.122.30 port 53114
Oct 01 16:30:37 compute-0 sshd-session[54825]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:30:37 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Oct 01 16:30:37 compute-0 systemd[1]: session-13.scope: Consumed 19.028s CPU time.
Oct 01 16:30:37 compute-0 systemd-logind[788]: Session 13 logged out. Waiting for processes to exit.
Oct 01 16:30:37 compute-0 systemd-logind[788]: Removed session 13.
Oct 01 16:30:50 compute-0 sshd-session[59599]: Accepted publickey for zuul from 192.168.122.30 port 40246 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:30:50 compute-0 systemd-logind[788]: New session 14 of user zuul.
Oct 01 16:30:50 compute-0 systemd[1]: Started Session 14 of User zuul.
Oct 01 16:30:50 compute-0 sshd-session[59599]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:30:51 compute-0 python3.9[59752]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:30:52 compute-0 sshd-session[59598]: Connection closed by authenticating user root 80.94.95.115 port 37848 [preauth]
Oct 01 16:30:52 compute-0 sudo[59907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qydsnzzxjknvuxcbwkyopwrqllxmqfub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336251.8697174-33-9796306558259/AnsiballZ_file.py'
Oct 01 16:30:52 compute-0 sudo[59907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:30:52 compute-0 python3.9[59909]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:30:52 compute-0 sudo[59907]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:53 compute-0 sudo[60082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eozemnhedpmwiubvhwznfdafqqcjuljm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336252.8396635-41-77617452618290/AnsiballZ_stat.py'
Oct 01 16:30:53 compute-0 sudo[60082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:30:53 compute-0 python3.9[60084]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:30:53 compute-0 sudo[60082]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:53 compute-0 sudo[60160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kulfmoxxrcodooympsxyzltoaoksgrvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336252.8396635-41-77617452618290/AnsiballZ_file.py'
Oct 01 16:30:53 compute-0 sudo[60160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:30:54 compute-0 python3.9[60162]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.7427xfrd recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:30:54 compute-0 sudo[60160]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:54 compute-0 sudo[60312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrmluydfnbwkiqpcotfwmlbzkhnuwoaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336254.618695-61-109438715596175/AnsiballZ_stat.py'
Oct 01 16:30:54 compute-0 sudo[60312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:30:55 compute-0 python3.9[60314]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:30:55 compute-0 sudo[60312]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:55 compute-0 sudo[60390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njwpksesynxwfmoihukzjpkpdwhvmcth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336254.618695-61-109438715596175/AnsiballZ_file.py'
Oct 01 16:30:55 compute-0 sudo[60390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:30:55 compute-0 python3.9[60392]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.3k4fp0xe recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:30:55 compute-0 sudo[60390]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:56 compute-0 sudo[60542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbxgynqcsbefkyaqsbijmwdlzlbllajn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336255.874786-74-24349755164931/AnsiballZ_file.py'
Oct 01 16:30:56 compute-0 sudo[60542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:30:56 compute-0 python3.9[60544]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:30:56 compute-0 sudo[60542]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:56 compute-0 sudo[60694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kudcjmtggwzdtcispszyjcdpgwkgbqrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336256.595182-82-139716086181844/AnsiballZ_stat.py'
Oct 01 16:30:56 compute-0 sudo[60694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:30:57 compute-0 python3.9[60696]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:30:57 compute-0 sudo[60694]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:57 compute-0 sudo[60772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsjkblmlqacbkrrhhmtcfdazrzjzzpmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336256.595182-82-139716086181844/AnsiballZ_file.py'
Oct 01 16:30:57 compute-0 sudo[60772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:30:57 compute-0 python3.9[60774]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:30:57 compute-0 sudo[60772]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:58 compute-0 sudo[60924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vttkxmsjwvepbbwmuszqbaiyegvgflld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336257.8608313-82-76762011692592/AnsiballZ_stat.py'
Oct 01 16:30:58 compute-0 sudo[60924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:30:58 compute-0 python3.9[60926]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:30:58 compute-0 sudo[60924]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:58 compute-0 sudo[61002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brmfazrfyszocxbybpnhyjubpfvvawwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336257.8608313-82-76762011692592/AnsiballZ_file.py'
Oct 01 16:30:58 compute-0 sudo[61002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:30:58 compute-0 python3.9[61004]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:30:58 compute-0 sudo[61002]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:59 compute-0 sudo[61154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugniljovkkapawiuqqnleycxlkltdbmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336258.9736464-105-306265022110/AnsiballZ_file.py'
Oct 01 16:30:59 compute-0 sudo[61154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:30:59 compute-0 python3.9[61156]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:30:59 compute-0 sudo[61154]: pam_unix(sudo:session): session closed for user root
Oct 01 16:30:59 compute-0 sudo[61306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhhpbsqwjuawdxvhrpvshlzxvkbduakm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336259.6701794-113-119544180872463/AnsiballZ_stat.py'
Oct 01 16:30:59 compute-0 sudo[61306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:00 compute-0 python3.9[61308]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:31:00 compute-0 sudo[61306]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:00 compute-0 sudo[61384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksdudxyyuwgbghzqckqnkxrslqtdjivi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336259.6701794-113-119544180872463/AnsiballZ_file.py'
Oct 01 16:31:00 compute-0 sudo[61384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:00 compute-0 python3.9[61386]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:00 compute-0 sudo[61384]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:01 compute-0 sudo[61536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kctkugsimjzfreqzkangcnpgmspilcla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336260.8439863-125-108661776314033/AnsiballZ_stat.py'
Oct 01 16:31:01 compute-0 sudo[61536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:01 compute-0 python3.9[61538]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:31:01 compute-0 sudo[61536]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:01 compute-0 sudo[61614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oohokxsixcxbvlyqrjuonryxcopsrdgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336260.8439863-125-108661776314033/AnsiballZ_file.py'
Oct 01 16:31:01 compute-0 sudo[61614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:01 compute-0 python3.9[61616]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:01 compute-0 sudo[61614]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:02 compute-0 sudo[61766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bavxhhgrchnsjleumrcseebhpshdlqdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336261.9733443-137-30028516813908/AnsiballZ_systemd.py'
Oct 01 16:31:02 compute-0 sudo[61766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:02 compute-0 python3.9[61768]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:31:02 compute-0 systemd[1]: Reloading.
Oct 01 16:31:03 compute-0 systemd-rc-local-generator[61795]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:31:03 compute-0 systemd-sysv-generator[61799]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:31:03 compute-0 sudo[61766]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:03 compute-0 sudo[61954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtblytkpwldpjhojsjruukrkpanzeclv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336263.366403-145-145330654998018/AnsiballZ_stat.py'
Oct 01 16:31:03 compute-0 sudo[61954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:03 compute-0 python3.9[61956]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:31:03 compute-0 sudo[61954]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:04 compute-0 sudo[62032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnhxnvyztztugwohttclieghmejzktfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336263.366403-145-145330654998018/AnsiballZ_file.py'
Oct 01 16:31:04 compute-0 sudo[62032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:04 compute-0 python3.9[62034]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:04 compute-0 sudo[62032]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:04 compute-0 sudo[62184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgniwrsploqwpvrzyydzexxhugovdigk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336264.5453115-157-218961492232894/AnsiballZ_stat.py'
Oct 01 16:31:04 compute-0 sudo[62184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:05 compute-0 python3.9[62186]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:31:05 compute-0 sudo[62184]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:05 compute-0 sudo[62262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohgbshkbplvfaoprvhriazsntnpsrfqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336264.5453115-157-218961492232894/AnsiballZ_file.py'
Oct 01 16:31:05 compute-0 sudo[62262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:05 compute-0 python3.9[62264]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:05 compute-0 sudo[62262]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:06 compute-0 sudo[62414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdcybxavuzqectaopauohmvbjbbeukhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336265.7329183-169-237728825286074/AnsiballZ_systemd.py'
Oct 01 16:31:06 compute-0 sudo[62414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:06 compute-0 python3.9[62416]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:31:06 compute-0 systemd[1]: Reloading.
Oct 01 16:31:06 compute-0 systemd-sysv-generator[62447]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:31:06 compute-0 systemd-rc-local-generator[62444]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:31:06 compute-0 systemd[1]: Starting Create netns directory...
Oct 01 16:31:06 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 01 16:31:06 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 01 16:31:06 compute-0 systemd[1]: Finished Create netns directory.
Oct 01 16:31:06 compute-0 sudo[62414]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:07 compute-0 python3.9[62607]: ansible-ansible.builtin.service_facts Invoked
Oct 01 16:31:07 compute-0 network[62624]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 01 16:31:07 compute-0 network[62625]: 'network-scripts' will be removed from distribution in near future.
Oct 01 16:31:07 compute-0 network[62626]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 01 16:31:13 compute-0 sudo[62887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhkxtnzjgoqtipniqsesfxcvizykaqeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336273.5258825-195-96858705776840/AnsiballZ_stat.py'
Oct 01 16:31:13 compute-0 sudo[62887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:14 compute-0 python3.9[62889]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:31:14 compute-0 sudo[62887]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:14 compute-0 sudo[62965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgiwrbvhtslnlbctmxakwqcyxgmksijr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336273.5258825-195-96858705776840/AnsiballZ_file.py'
Oct 01 16:31:14 compute-0 sudo[62965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:14 compute-0 python3.9[62967]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:14 compute-0 sudo[62965]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:15 compute-0 sudo[63117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbbzfbcuxmbtdlxulgholeurhxhcgpao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336274.9033682-208-170641671779413/AnsiballZ_file.py'
Oct 01 16:31:15 compute-0 sudo[63117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:15 compute-0 python3.9[63119]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:15 compute-0 sudo[63117]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:16 compute-0 sudo[63269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zydhmjctnyitogcjodjmopbppeqzttrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336275.7031732-216-199678254556788/AnsiballZ_stat.py'
Oct 01 16:31:16 compute-0 sudo[63269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:16 compute-0 python3.9[63271]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:31:16 compute-0 sudo[63269]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:16 compute-0 sudo[63392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sngwwrcaokyekgbgtwulnxomqenpwqtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336275.7031732-216-199678254556788/AnsiballZ_copy.py'
Oct 01 16:31:16 compute-0 sudo[63392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:17 compute-0 python3.9[63394]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336275.7031732-216-199678254556788/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:17 compute-0 sudo[63392]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:18 compute-0 sudo[63544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqlbumnaebxzbhkqmabcqlbmaribeoyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336277.5206568-234-93452277312890/AnsiballZ_timezone.py'
Oct 01 16:31:18 compute-0 sudo[63544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:18 compute-0 python3.9[63546]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 01 16:31:18 compute-0 systemd[1]: Starting Time & Date Service...
Oct 01 16:31:18 compute-0 systemd[1]: Started Time & Date Service.
Oct 01 16:31:18 compute-0 sudo[63544]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:19 compute-0 sudo[63700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zetktmnxkzdavqaestnmyyqfvvwscenz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336278.7076578-243-273078253429751/AnsiballZ_file.py'
Oct 01 16:31:19 compute-0 sudo[63700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:19 compute-0 python3.9[63702]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:19 compute-0 sudo[63700]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:19 compute-0 sudo[63852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqwxijymrcgwltrhbvhyrygscsfjfsqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336279.5210748-251-153341340599249/AnsiballZ_stat.py'
Oct 01 16:31:19 compute-0 sudo[63852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:20 compute-0 python3.9[63854]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:31:20 compute-0 sudo[63852]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:20 compute-0 sudo[63975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idavneryadymvhegaxmbdjavvrjathue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336279.5210748-251-153341340599249/AnsiballZ_copy.py'
Oct 01 16:31:20 compute-0 sudo[63975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:20 compute-0 python3.9[63977]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336279.5210748-251-153341340599249/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:20 compute-0 sudo[63975]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:21 compute-0 sudo[64127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwxblmogzhllgnfwpzfmjidgpnjudvpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336280.8911092-266-88074521858635/AnsiballZ_stat.py'
Oct 01 16:31:21 compute-0 sudo[64127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:21 compute-0 python3.9[64129]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:31:21 compute-0 sudo[64127]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:21 compute-0 sudo[64250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvtmzrhjeagvfyvasawfojokfinfrxnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336280.8911092-266-88074521858635/AnsiballZ_copy.py'
Oct 01 16:31:21 compute-0 sudo[64250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:22 compute-0 python3.9[64252]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336280.8911092-266-88074521858635/.source.yaml _original_basename=.jm50qtpf follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:22 compute-0 sudo[64250]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:22 compute-0 sudo[64402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmjgzqepzymoehjpituytrtwhagxavpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336282.3543975-281-38795935235307/AnsiballZ_stat.py'
Oct 01 16:31:22 compute-0 sudo[64402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:23 compute-0 python3.9[64404]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:31:23 compute-0 sudo[64402]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:23 compute-0 sudo[64525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czvxxzuhvjcaouqddnqhaisezdkrjxkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336282.3543975-281-38795935235307/AnsiballZ_copy.py'
Oct 01 16:31:23 compute-0 sudo[64525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:23 compute-0 python3.9[64527]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336282.3543975-281-38795935235307/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:23 compute-0 sudo[64525]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:24 compute-0 sudo[64677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djcrywgvvuyozyatcwduzgkryiqrwbev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336283.9197867-296-40139440673101/AnsiballZ_command.py'
Oct 01 16:31:24 compute-0 sudo[64677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:24 compute-0 python3.9[64679]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:31:24 compute-0 sudo[64677]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:25 compute-0 sudo[64830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjwkyqemtzwzcoshjhbrmskfwsjdvmgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336284.8562496-304-259579614406385/AnsiballZ_command.py'
Oct 01 16:31:25 compute-0 sudo[64830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:25 compute-0 python3.9[64832]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:31:25 compute-0 sudo[64830]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:26 compute-0 sudo[64983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cawziphkuwljsoyxgrgzrhqzsgsxwdcs ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759336285.6113715-312-34879715427015/AnsiballZ_edpm_nftables_from_files.py'
Oct 01 16:31:26 compute-0 sudo[64983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:26 compute-0 python3[64985]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 01 16:31:26 compute-0 sudo[64983]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:26 compute-0 sudo[65135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlmkvwrgfwvaipwzectxdgyvontvygnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336286.5363383-320-89874771283060/AnsiballZ_stat.py'
Oct 01 16:31:26 compute-0 sudo[65135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:27 compute-0 python3.9[65137]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:31:27 compute-0 sudo[65135]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:27 compute-0 sudo[65258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwuymsgzczakxvatzsigxloofzazruww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336286.5363383-320-89874771283060/AnsiballZ_copy.py'
Oct 01 16:31:27 compute-0 sudo[65258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:27 compute-0 python3.9[65260]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336286.5363383-320-89874771283060/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:27 compute-0 sudo[65258]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:28 compute-0 sudo[65410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcpxdemsepntzrijlidzkioxvzraigrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336287.8888154-335-193203295190731/AnsiballZ_stat.py'
Oct 01 16:31:28 compute-0 sudo[65410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:28 compute-0 python3.9[65412]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:31:28 compute-0 sudo[65410]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:28 compute-0 sudo[65533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jamociiwbuprblekoqbemnrlhvuuzlza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336287.8888154-335-193203295190731/AnsiballZ_copy.py'
Oct 01 16:31:28 compute-0 sudo[65533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:29 compute-0 python3.9[65535]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336287.8888154-335-193203295190731/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:29 compute-0 sudo[65533]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:29 compute-0 sudo[65685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rouvxzxjsgnlwhndqmdjtxrxseyqygdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336289.336185-350-22197737346933/AnsiballZ_stat.py'
Oct 01 16:31:29 compute-0 sudo[65685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:29 compute-0 python3.9[65687]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:31:29 compute-0 sudo[65685]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:30 compute-0 sudo[65808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbzvzncrmtzxrguuykjwjalrpdrpcgjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336289.336185-350-22197737346933/AnsiballZ_copy.py'
Oct 01 16:31:30 compute-0 sudo[65808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:30 compute-0 python3.9[65810]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336289.336185-350-22197737346933/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:30 compute-0 sudo[65808]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:31 compute-0 sudo[65960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbvbadlnssgyogmjdodkymbtazujvssn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336290.8472018-365-208677409283805/AnsiballZ_stat.py'
Oct 01 16:31:31 compute-0 sudo[65960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:31 compute-0 python3.9[65962]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:31:31 compute-0 sudo[65960]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:31 compute-0 sudo[66083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttejojxiuffqpzfrpksqscaayxpljxnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336290.8472018-365-208677409283805/AnsiballZ_copy.py'
Oct 01 16:31:31 compute-0 sudo[66083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:32 compute-0 python3.9[66085]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336290.8472018-365-208677409283805/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:32 compute-0 sudo[66083]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:32 compute-0 sudo[66235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpwxifqozqxvsveldqubnwjnelrkdfwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336292.3807468-380-1358926115555/AnsiballZ_stat.py'
Oct 01 16:31:32 compute-0 sudo[66235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:33 compute-0 python3.9[66237]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:31:33 compute-0 sudo[66235]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:33 compute-0 sudo[66358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbgxzaacbymqpmgsekutavcvmztomvxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336292.3807468-380-1358926115555/AnsiballZ_copy.py'
Oct 01 16:31:33 compute-0 sudo[66358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:33 compute-0 python3.9[66360]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336292.3807468-380-1358926115555/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:33 compute-0 sudo[66358]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:34 compute-0 sudo[66510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yovuxgksozineaulccmzzrjjldyasflm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336294.0078764-395-83443088220943/AnsiballZ_file.py'
Oct 01 16:31:34 compute-0 sudo[66510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:34 compute-0 python3.9[66512]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:34 compute-0 sudo[66510]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:35 compute-0 sudo[66662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqqjvuitqmhajmupvoswfytvhaeogctu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336294.8023493-403-178718013866694/AnsiballZ_command.py'
Oct 01 16:31:35 compute-0 sudo[66662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:35 compute-0 python3.9[66664]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:31:35 compute-0 sudo[66662]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:36 compute-0 sudo[66821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsljmqegjymnrzaprultgegvszdgtzop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336295.636443-411-61241394855918/AnsiballZ_blockinfile.py'
Oct 01 16:31:36 compute-0 sudo[66821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:36 compute-0 python3.9[66823]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:36 compute-0 sudo[66821]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:37 compute-0 sudo[66974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjeeqqfndmezkoxgsuglwisrvfkefjpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336296.7004492-420-115872420788713/AnsiballZ_file.py'
Oct 01 16:31:37 compute-0 sudo[66974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:37 compute-0 python3.9[66976]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:37 compute-0 sudo[66974]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:37 compute-0 sudo[67126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eveokznfeehvjudpqlrzlcfiodpjugdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336297.4249144-420-240756316364895/AnsiballZ_file.py'
Oct 01 16:31:37 compute-0 sudo[67126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:37 compute-0 python3.9[67128]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:38 compute-0 sudo[67126]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:38 compute-0 sudo[67278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaugewiyfgwjusnxeggrlejareghqnfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336298.2185054-435-245345498815010/AnsiballZ_mount.py'
Oct 01 16:31:38 compute-0 sudo[67278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:38 compute-0 chronyd[54344]: Selected source 198.50.127.72 (pool.ntp.org)
Oct 01 16:31:38 compute-0 python3.9[67280]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 01 16:31:38 compute-0 sudo[67278]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:39 compute-0 sudo[67431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooajugtpcxzqwiaeyaohcbdwlxozcioq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336299.1032007-435-230733898290912/AnsiballZ_mount.py'
Oct 01 16:31:39 compute-0 sudo[67431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:39 compute-0 python3.9[67433]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 01 16:31:39 compute-0 sudo[67431]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:40 compute-0 sshd-session[59602]: Connection closed by 192.168.122.30 port 40246
Oct 01 16:31:40 compute-0 sshd-session[59599]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:31:40 compute-0 systemd-logind[788]: Session 14 logged out. Waiting for processes to exit.
Oct 01 16:31:40 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Oct 01 16:31:40 compute-0 systemd[1]: session-14.scope: Consumed 35.992s CPU time.
Oct 01 16:31:40 compute-0 systemd-logind[788]: Removed session 14.
Oct 01 16:31:45 compute-0 sshd-session[67459]: Accepted publickey for zuul from 192.168.122.30 port 43432 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:31:45 compute-0 systemd-logind[788]: New session 15 of user zuul.
Oct 01 16:31:45 compute-0 systemd[1]: Started Session 15 of User zuul.
Oct 01 16:31:45 compute-0 sshd-session[67459]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:31:46 compute-0 sudo[67612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfrhusjtryffoizxwnfhkfytuiabfmlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336306.0730078-16-32627059885565/AnsiballZ_tempfile.py'
Oct 01 16:31:46 compute-0 sudo[67612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:46 compute-0 python3.9[67614]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 01 16:31:46 compute-0 sudo[67612]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:47 compute-0 sudo[67764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfpscecuhcqwtgmeqskptpswdhzjxiyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336307.0202665-28-107997550641593/AnsiballZ_stat.py'
Oct 01 16:31:47 compute-0 sudo[67764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:47 compute-0 python3.9[67766]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:31:47 compute-0 sudo[67764]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:48 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 01 16:31:48 compute-0 sudo[67918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnpjczlhzonvyfhqaljnaznzbaitlpzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336307.824117-38-78778209500359/AnsiballZ_setup.py'
Oct 01 16:31:48 compute-0 sudo[67918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:48 compute-0 python3.9[67920]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:31:48 compute-0 sudo[67918]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:49 compute-0 sudo[68070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qznjhckvgrssxdznrtstdxndiddsxgok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336309.018493-47-39554192562575/AnsiballZ_blockinfile.py'
Oct 01 16:31:49 compute-0 sudo[68070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:49 compute-0 python3.9[68072]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCz7gvtcbMKzpFnHk1f4agzt9I90wetCA2EaLBu1oNJgTT3PtXCey882cflGOcGiO6eA2djvYpIUL+o7dwRLRBNZ97kA04YOAxeYgxIAXDoxPAbfWV8bVry0kTPdKZonohal9Yr3OlzFdBEj6ZVjrAYD3ZOiXeisDyUeOpVoUNWE7DR9kGSu0fuebmAAVWWsrP1IR+DWBG491Cc3cMgrCzQLjDCGcjk1OyXJiyHYAlu+Zef+3kC7YM4l9GpgaFsQFTQE1JkpkqN7qwI47UUE8Z7RUJR9Oeu5Jq+Mjo3b0N3yscTa/IkuG8z9eObxEv523hvSPy1A2EyyVpJYUWJ0AA70tn2el30bWrMoX8lIUwDIuGiwWtXi7w8XpCoOxwzaRgvZ7sHXk2tAuQAHJhpaWIImdqHvhsm35BsBrfRTgZ28SlY2IidIM26CK0JdMFTDUdetjZUsT3KsCrwpJBI+znBCqyzLG3y8iIpbcetM/g+g0OD6im4a7bmbQiWmJVDta8=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP/WM+tWUlUfKM2Ij44JLzsmgyV7ZneIlfyqQnDhdJi9
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMLBvCDWMwltEj/RodBE8oenZIUSaxU7mHDpOkUqLs1NZFXgaYsbb2fSdVyrhZx1ae8i/pDWxipoAGqK53fnMAo=
                                             create=True mode=0644 path=/tmp/ansible.hbcz8d1z state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:49 compute-0 sudo[68070]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:50 compute-0 sudo[68222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htmvsnnieawynpkhswejwwptpyagjvos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336309.8678958-55-48059392598309/AnsiballZ_command.py'
Oct 01 16:31:50 compute-0 sudo[68222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:50 compute-0 python3.9[68224]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.hbcz8d1z' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:31:50 compute-0 sudo[68222]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:51 compute-0 sudo[68376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoltqpquqvnheffasaxxkxzvliskefof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336310.7345598-63-176424436837256/AnsiballZ_file.py'
Oct 01 16:31:51 compute-0 sudo[68376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:31:51 compute-0 python3.9[68378]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.hbcz8d1z state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:31:51 compute-0 sudo[68376]: pam_unix(sudo:session): session closed for user root
Oct 01 16:31:51 compute-0 sshd-session[67462]: Connection closed by 192.168.122.30 port 43432
Oct 01 16:31:51 compute-0 sshd-session[67459]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:31:51 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Oct 01 16:31:51 compute-0 systemd[1]: session-15.scope: Consumed 3.370s CPU time.
Oct 01 16:31:51 compute-0 systemd-logind[788]: Session 15 logged out. Waiting for processes to exit.
Oct 01 16:31:51 compute-0 systemd-logind[788]: Removed session 15.
Oct 01 16:31:59 compute-0 sshd-session[68403]: Accepted publickey for zuul from 192.168.122.30 port 43672 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:31:59 compute-0 systemd-logind[788]: New session 16 of user zuul.
Oct 01 16:31:59 compute-0 systemd[1]: Started Session 16 of User zuul.
Oct 01 16:31:59 compute-0 sshd-session[68403]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:32:00 compute-0 python3.9[68556]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:32:01 compute-0 sudo[68710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cluokgjnvodudnnhcvjryvhqnzxaidvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336321.2423518-32-87971595513175/AnsiballZ_systemd.py'
Oct 01 16:32:01 compute-0 sudo[68710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:02 compute-0 python3.9[68712]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 01 16:32:02 compute-0 sudo[68710]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:02 compute-0 sudo[68864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzmlyiyxxcqugacgvnevcfgxvgxamyhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336322.4301293-40-16999939974783/AnsiballZ_systemd.py'
Oct 01 16:32:02 compute-0 sudo[68864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:03 compute-0 python3.9[68866]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 16:32:03 compute-0 sudo[68864]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:03 compute-0 sudo[69017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pheephnyegixrljogghdahlobkpeoyjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336323.3177693-49-136908062275645/AnsiballZ_command.py'
Oct 01 16:32:03 compute-0 sudo[69017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:03 compute-0 python3.9[69019]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:32:03 compute-0 sudo[69017]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:04 compute-0 sudo[69170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wslhkypkoyhwswjbvnmuqbtjsgxjyzrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336324.125962-57-59165879193465/AnsiballZ_stat.py'
Oct 01 16:32:04 compute-0 sudo[69170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:04 compute-0 python3.9[69172]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:32:04 compute-0 sudo[69170]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:05 compute-0 sudo[69324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otzzbutraqqunevigyitegelloopfnzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336325.0625548-65-170407957658800/AnsiballZ_command.py'
Oct 01 16:32:05 compute-0 sudo[69324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:05 compute-0 python3.9[69326]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:32:05 compute-0 sudo[69324]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:06 compute-0 sudo[69479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwbmaiicpzxtujhvjfulpuyltpkrpscd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336325.7459018-73-12648124419869/AnsiballZ_file.py'
Oct 01 16:32:06 compute-0 sudo[69479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:06 compute-0 python3.9[69481]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:32:06 compute-0 sudo[69479]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:06 compute-0 sshd-session[68406]: Connection closed by 192.168.122.30 port 43672
Oct 01 16:32:06 compute-0 sshd-session[68403]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:32:06 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Oct 01 16:32:06 compute-0 systemd[1]: session-16.scope: Consumed 4.478s CPU time.
Oct 01 16:32:06 compute-0 systemd-logind[788]: Session 16 logged out. Waiting for processes to exit.
Oct 01 16:32:06 compute-0 systemd-logind[788]: Removed session 16.
Oct 01 16:32:12 compute-0 sshd-session[69506]: Accepted publickey for zuul from 192.168.122.30 port 39378 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:32:12 compute-0 systemd-logind[788]: New session 17 of user zuul.
Oct 01 16:32:12 compute-0 systemd[1]: Started Session 17 of User zuul.
Oct 01 16:32:12 compute-0 sshd-session[69506]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:32:13 compute-0 python3.9[69659]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:32:14 compute-0 sudo[69813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkiiqaifttmeshtyhltjafsjscigbulq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336334.2702003-34-66903179721181/AnsiballZ_setup.py'
Oct 01 16:32:14 compute-0 sudo[69813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:14 compute-0 python3.9[69815]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 16:32:15 compute-0 sudo[69813]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:15 compute-0 sudo[69897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxotphoaxxsywhzzedukvauceipxjitu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336334.2702003-34-66903179721181/AnsiballZ_dnf.py'
Oct 01 16:32:15 compute-0 sudo[69897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:15 compute-0 python3.9[69899]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 01 16:32:17 compute-0 sudo[69897]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:17 compute-0 python3.9[70050]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:32:19 compute-0 python3.9[70201]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 01 16:32:20 compute-0 python3.9[70351]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:32:20 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 16:32:20 compute-0 python3.9[70502]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:32:21 compute-0 sshd-session[69509]: Connection closed by 192.168.122.30 port 39378
Oct 01 16:32:21 compute-0 sshd-session[69506]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:32:21 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Oct 01 16:32:21 compute-0 systemd[1]: session-17.scope: Consumed 5.992s CPU time.
Oct 01 16:32:21 compute-0 systemd-logind[788]: Session 17 logged out. Waiting for processes to exit.
Oct 01 16:32:21 compute-0 systemd-logind[788]: Removed session 17.
Oct 01 16:32:28 compute-0 sshd-session[70527]: Accepted publickey for zuul from 38.129.56.198 port 51506 ssh2: RSA SHA256:5lTJU/gEmQ/yi1WTLiMVGJft7+lcRZSTGB6P0Q6MG20
Oct 01 16:32:28 compute-0 systemd-logind[788]: New session 18 of user zuul.
Oct 01 16:32:28 compute-0 systemd[1]: Started Session 18 of User zuul.
Oct 01 16:32:28 compute-0 sshd-session[70527]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:32:29 compute-0 sudo[70603]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khteyqolngjpvcxslsgatctvuttuaqga ; /usr/bin/python3'
Oct 01 16:32:29 compute-0 sudo[70603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:29 compute-0 useradd[70607]: new group: name=ceph-admin, GID=42478
Oct 01 16:32:29 compute-0 useradd[70607]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Oct 01 16:32:29 compute-0 sudo[70603]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:29 compute-0 sudo[70689]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uikleheleiizmytuaomitiqhpfmfntvd ; /usr/bin/python3'
Oct 01 16:32:29 compute-0 sudo[70689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:29 compute-0 sudo[70689]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:30 compute-0 sudo[70762]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wswqxiwbtlupxhwfsnqrksemxlytkyve ; /usr/bin/python3'
Oct 01 16:32:30 compute-0 sudo[70762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:30 compute-0 sudo[70762]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:30 compute-0 sudo[70812]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjycjnmyvlimnraidqufshhchbnrhowj ; /usr/bin/python3'
Oct 01 16:32:30 compute-0 sudo[70812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:30 compute-0 sudo[70812]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:31 compute-0 sudo[70838]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmbzakprfsujzeutgdcjcvgghtncciyw ; /usr/bin/python3'
Oct 01 16:32:31 compute-0 sudo[70838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:31 compute-0 sudo[70838]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:31 compute-0 sudo[70864]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yspewkirpkysltjfopktraupulildyiw ; /usr/bin/python3'
Oct 01 16:32:31 compute-0 sudo[70864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:31 compute-0 sudo[70864]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:31 compute-0 sudo[70890]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzyutdtnvvsokcmvwbeeqmviuoinbqws ; /usr/bin/python3'
Oct 01 16:32:31 compute-0 sudo[70890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:32 compute-0 sudo[70890]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:32 compute-0 sudo[70968]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtbhannlbvaqdtdajcziynpizfzsnvtm ; /usr/bin/python3'
Oct 01 16:32:32 compute-0 sudo[70968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:32 compute-0 sudo[70968]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:32 compute-0 sudo[71041]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbrxikwaszmyinsejdldfipgtnsjqowv ; /usr/bin/python3'
Oct 01 16:32:32 compute-0 sudo[71041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:33 compute-0 sudo[71041]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:33 compute-0 sudo[71143]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdfqyflamizhotxewtdwdnbtkwsacsmw ; /usr/bin/python3'
Oct 01 16:32:33 compute-0 sudo[71143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:33 compute-0 sudo[71143]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:33 compute-0 sudo[71216]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oattyuzwwyehlmsssygkdtyxkuebjwnj ; /usr/bin/python3'
Oct 01 16:32:33 compute-0 sudo[71216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:34 compute-0 sudo[71216]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:34 compute-0 sudo[71266]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oogncfjjriyuxqqvkhbtcnjpiersqnyv ; /usr/bin/python3'
Oct 01 16:32:34 compute-0 sudo[71266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:34 compute-0 python3[71268]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:32:35 compute-0 sudo[71266]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:36 compute-0 sudo[71361]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqsucdiwzdukbkkypadmuhogvsrphvza ; /usr/bin/python3'
Oct 01 16:32:36 compute-0 sudo[71361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:36 compute-0 python3[71363]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 01 16:32:37 compute-0 sudo[71361]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:37 compute-0 sudo[71388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iszpfienwinpftnycfiogtarxyxxpuqt ; /usr/bin/python3'
Oct 01 16:32:37 compute-0 sudo[71388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:37 compute-0 python3[71390]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:32:37 compute-0 sudo[71388]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:38 compute-0 sudo[71414]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdbcvzdyglafdgsoxhjaermyezldvumj ; /usr/bin/python3'
Oct 01 16:32:38 compute-0 sudo[71414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:38 compute-0 python3[71416]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:32:38 compute-0 kernel: loop: module loaded
Oct 01 16:32:38 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Oct 01 16:32:38 compute-0 sudo[71414]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:38 compute-0 sudo[71449]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aikkecpyisrlunnuujdaunlvbpbkzptq ; /usr/bin/python3'
Oct 01 16:32:38 compute-0 sudo[71449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:38 compute-0 python3[71451]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:32:38 compute-0 lvm[71454]: PV /dev/loop3 not used.
Oct 01 16:32:38 compute-0 lvm[71463]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 01 16:32:39 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 01 16:32:39 compute-0 sudo[71449]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:39 compute-0 lvm[71465]:   1 logical volume(s) in volume group "ceph_vg0" now active
Oct 01 16:32:39 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct 01 16:32:39 compute-0 sudo[71542]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsfobplbflldxyhcngsumewlqaezxuwf ; /usr/bin/python3'
Oct 01 16:32:39 compute-0 sudo[71542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:39 compute-0 python3[71544]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 16:32:39 compute-0 sudo[71542]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:39 compute-0 sudo[71615]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbthidradpendtwluejrmpckuuinmnca ; /usr/bin/python3'
Oct 01 16:32:39 compute-0 sudo[71615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:39 compute-0 python3[71617]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759336359.2172441-32723-100180374948476/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:32:39 compute-0 sudo[71615]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:40 compute-0 sudo[71665]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcvbbyfbhlcjazmeztebplabvnivewns ; /usr/bin/python3'
Oct 01 16:32:40 compute-0 sudo[71665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:40 compute-0 python3[71667]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:32:40 compute-0 systemd[1]: Reloading.
Oct 01 16:32:40 compute-0 systemd-rc-local-generator[71697]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:32:40 compute-0 systemd-sysv-generator[71700]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:32:41 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct 01 16:32:41 compute-0 bash[71707]: /dev/loop3: [64513]:4329715 (/var/lib/ceph-osd-0.img)
Oct 01 16:32:41 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct 01 16:32:41 compute-0 lvm[71708]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 01 16:32:41 compute-0 lvm[71708]: VG ceph_vg0 finished
Oct 01 16:32:41 compute-0 sudo[71665]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:41 compute-0 sudo[71732]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceftsflfbnclowhtjpgzeqwnmgkwitsa ; /usr/bin/python3'
Oct 01 16:32:41 compute-0 sudo[71732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:41 compute-0 python3[71734]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 01 16:32:42 compute-0 sudo[71732]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:42 compute-0 sudo[71759]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pimbvyorpygdbgfihzjlhrwxasjquyal ; /usr/bin/python3'
Oct 01 16:32:42 compute-0 sudo[71759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:42 compute-0 python3[71761]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:32:42 compute-0 sudo[71759]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:43 compute-0 sudo[71785]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aouyenpceaberzyqrirvigywvfnihlrz ; /usr/bin/python3'
Oct 01 16:32:43 compute-0 sudo[71785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:43 compute-0 python3[71787]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:32:43 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Oct 01 16:32:43 compute-0 sudo[71785]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:43 compute-0 sudo[71817]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyzswxwzlomaovnvoohtgnejlpqtxwpa ; /usr/bin/python3'
Oct 01 16:32:43 compute-0 sudo[71817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:43 compute-0 python3[71819]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:32:43 compute-0 lvm[71822]: PV /dev/loop4 not used.
Oct 01 16:32:43 compute-0 lvm[71831]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 01 16:32:43 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Oct 01 16:32:43 compute-0 lvm[71833]:   1 logical volume(s) in volume group "ceph_vg1" now active
Oct 01 16:32:43 compute-0 sudo[71817]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:43 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Oct 01 16:32:44 compute-0 sudo[71909]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwilnoxvaohwjnosyddgwvzbxkpnfwjk ; /usr/bin/python3'
Oct 01 16:32:44 compute-0 sudo[71909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:44 compute-0 python3[71911]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 16:32:44 compute-0 sudo[71909]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:44 compute-0 sudo[71982]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjfunpfmryzeflwprxzmwtqqkkytsgxd ; /usr/bin/python3'
Oct 01 16:32:44 compute-0 sudo[71982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:44 compute-0 python3[71984]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759336364.1046927-32750-94974920736703/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:32:44 compute-0 sudo[71982]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:44 compute-0 sudo[72032]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivaromlqflossqcrlqywzeqvdesghewz ; /usr/bin/python3'
Oct 01 16:32:45 compute-0 sudo[72032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:45 compute-0 python3[72034]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:32:45 compute-0 systemd[1]: Reloading.
Oct 01 16:32:45 compute-0 systemd-rc-local-generator[72064]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:32:45 compute-0 systemd-sysv-generator[72068]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:32:45 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct 01 16:32:45 compute-0 bash[72074]: /dev/loop4: [64513]:4350032 (/var/lib/ceph-osd-1.img)
Oct 01 16:32:45 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct 01 16:32:45 compute-0 sudo[72032]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:45 compute-0 lvm[72075]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 01 16:32:45 compute-0 lvm[72075]: VG ceph_vg1 finished
Oct 01 16:32:45 compute-0 sudo[72100]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-golrpaysqobjubdrbeklukkmozqvoenh ; /usr/bin/python3'
Oct 01 16:32:45 compute-0 sudo[72100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:45 compute-0 python3[72102]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 01 16:32:47 compute-0 sudo[72100]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:47 compute-0 sudo[72127]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbmamkzdpjncekgluxffeasbqhzijvit ; /usr/bin/python3'
Oct 01 16:32:47 compute-0 sudo[72127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:47 compute-0 python3[72129]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:32:47 compute-0 sudo[72127]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:47 compute-0 sudo[72153]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkifcpubuqmweesrcarchzxxmyagpobr ; /usr/bin/python3'
Oct 01 16:32:47 compute-0 sudo[72153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:47 compute-0 python3[72155]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:32:47 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Oct 01 16:32:47 compute-0 sudo[72153]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:47 compute-0 sudo[72185]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbkxgeinuxptqqytkijqvouhzwuwlkrb ; /usr/bin/python3'
Oct 01 16:32:47 compute-0 sudo[72185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:48 compute-0 python3[72187]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:32:48 compute-0 lvm[72190]: PV /dev/loop5 not used.
Oct 01 16:32:48 compute-0 lvm[72200]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 01 16:32:48 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Oct 01 16:32:48 compute-0 sudo[72185]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:48 compute-0 lvm[72202]:   1 logical volume(s) in volume group "ceph_vg2" now active
Oct 01 16:32:48 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Oct 01 16:32:48 compute-0 sudo[72278]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdzcmnvzakbckwcwlklwhlsgdviijfgd ; /usr/bin/python3'
Oct 01 16:32:48 compute-0 sudo[72278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:48 compute-0 python3[72280]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 16:32:48 compute-0 sudo[72278]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:49 compute-0 sudo[72351]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lothgcgvtrawhszwqtsckdstnzkunnrw ; /usr/bin/python3'
Oct 01 16:32:49 compute-0 sudo[72351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:49 compute-0 python3[72353]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759336368.5774777-32777-163693090181647/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:32:49 compute-0 sudo[72351]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:49 compute-0 sudo[72401]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmelbtyattdwnmscdmwjjifzazrbavcn ; /usr/bin/python3'
Oct 01 16:32:49 compute-0 sudo[72401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:49 compute-0 python3[72403]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:32:49 compute-0 systemd[1]: Reloading.
Oct 01 16:32:49 compute-0 systemd-rc-local-generator[72431]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:32:49 compute-0 systemd-sysv-generator[72435]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:32:50 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct 01 16:32:50 compute-0 bash[72443]: /dev/loop5: [64513]:4656283 (/var/lib/ceph-osd-2.img)
Oct 01 16:32:50 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct 01 16:32:50 compute-0 lvm[72445]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 01 16:32:50 compute-0 lvm[72445]: VG ceph_vg2 finished
Oct 01 16:32:50 compute-0 sudo[72401]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:52 compute-0 python3[72469]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:32:54 compute-0 sudo[72560]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuvxvrhzwnlmcufbxsontebuphgueuhj ; /usr/bin/python3'
Oct 01 16:32:54 compute-0 sudo[72560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:54 compute-0 python3[72562]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 01 16:32:55 compute-0 groupadd[72568]: group added to /etc/group: name=cephadm, GID=992
Oct 01 16:32:55 compute-0 groupadd[72568]: group added to /etc/gshadow: name=cephadm
Oct 01 16:32:55 compute-0 groupadd[72568]: new group: name=cephadm, GID=992
Oct 01 16:32:55 compute-0 useradd[72575]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Oct 01 16:32:55 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 01 16:32:55 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 01 16:32:56 compute-0 sudo[72560]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:56 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 01 16:32:56 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 01 16:32:56 compute-0 systemd[1]: run-r01d0fd349e224e5ab7d66e6d3ab212d7.service: Deactivated successfully.
Oct 01 16:32:56 compute-0 sudo[72675]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekcettbwcaltovnpalyofmejdkaqwffh ; /usr/bin/python3'
Oct 01 16:32:56 compute-0 sudo[72675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:56 compute-0 python3[72677]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:32:56 compute-0 sudo[72675]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:56 compute-0 sudo[72703]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbiodlyhvyeuzoxwsamgpxzvyotobsep ; /usr/bin/python3'
Oct 01 16:32:56 compute-0 sudo[72703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:56 compute-0 python3[72705]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:32:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 16:32:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 16:32:57 compute-0 sudo[72703]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:57 compute-0 sudo[72766]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaqzytdsxiassslkrbviijlnjyihklxb ; /usr/bin/python3'
Oct 01 16:32:57 compute-0 sudo[72766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:57 compute-0 python3[72768]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:32:57 compute-0 sudo[72766]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:57 compute-0 sudo[72792]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubomxapvbxocagwammpmjxutsghhofjw ; /usr/bin/python3'
Oct 01 16:32:57 compute-0 sudo[72792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:58 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 16:32:58 compute-0 python3[72794]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:32:58 compute-0 sudo[72792]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:58 compute-0 sudo[72870]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjiiwnwtxvpmfsvaajqrnsivnllqfxuq ; /usr/bin/python3'
Oct 01 16:32:58 compute-0 sudo[72870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:58 compute-0 python3[72872]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 16:32:58 compute-0 sudo[72870]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:59 compute-0 sudo[72943]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ectusbvhprdbcroelyimmlaxdccemyaa ; /usr/bin/python3'
Oct 01 16:32:59 compute-0 sudo[72943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:32:59 compute-0 python3[72945]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759336378.5958028-32924-160060675696187/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:32:59 compute-0 sudo[72943]: pam_unix(sudo:session): session closed for user root
Oct 01 16:32:59 compute-0 sudo[73045]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctzvnxklpsusikvjywjykyuzuqnnhpye ; /usr/bin/python3'
Oct 01 16:32:59 compute-0 sudo[73045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:33:00 compute-0 python3[73047]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 16:33:00 compute-0 sudo[73045]: pam_unix(sudo:session): session closed for user root
Oct 01 16:33:00 compute-0 sudo[73118]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqegjbxvnziwlgdlxkrkvhuxcffuyteh ; /usr/bin/python3'
Oct 01 16:33:00 compute-0 sudo[73118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:33:00 compute-0 python3[73120]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759336379.7392843-32942-249693083882288/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:33:00 compute-0 sudo[73118]: pam_unix(sudo:session): session closed for user root
Oct 01 16:33:00 compute-0 sudo[73168]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdbxenfbbhubtnwfbjujhanlevxmiavt ; /usr/bin/python3'
Oct 01 16:33:00 compute-0 sudo[73168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:33:00 compute-0 python3[73170]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:33:00 compute-0 sudo[73168]: pam_unix(sudo:session): session closed for user root
Oct 01 16:33:01 compute-0 sudo[73196]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruqqghmuoorldpuwjohcqjmregfmvlnv ; /usr/bin/python3'
Oct 01 16:33:01 compute-0 sudo[73196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:33:01 compute-0 python3[73198]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:33:01 compute-0 sudo[73196]: pam_unix(sudo:session): session closed for user root
Oct 01 16:33:01 compute-0 sudo[73224]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-domelpsnmrftrrrvoiwhbctqvgkngdsq ; /usr/bin/python3'
Oct 01 16:33:01 compute-0 sudo[73224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:33:01 compute-0 python3[73226]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:33:01 compute-0 sudo[73224]: pam_unix(sudo:session): session closed for user root
Oct 01 16:33:01 compute-0 sudo[73252]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlqbrzwrdnkbbyhnjhrqodfqrxdyydyc ; /usr/bin/python3'
Oct 01 16:33:01 compute-0 sudo[73252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:33:01 compute-0 python3[73254]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:33:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 16:33:02 compute-0 sshd-session[73270]: Accepted publickey for ceph-admin from 192.168.122.100 port 39422 ssh2: RSA SHA256:KPvZnRcsTOaBZYiLSl21+XqX/cMo4GccpaCtxoWDcjI
Oct 01 16:33:02 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct 01 16:33:02 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 01 16:33:02 compute-0 systemd-logind[788]: New session 19 of user ceph-admin.
Oct 01 16:33:02 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 01 16:33:02 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct 01 16:33:02 compute-0 systemd[73274]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 16:33:02 compute-0 systemd[73274]: Queued start job for default target Main User Target.
Oct 01 16:33:02 compute-0 systemd[73274]: Created slice User Application Slice.
Oct 01 16:33:02 compute-0 systemd[73274]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 01 16:33:02 compute-0 systemd[73274]: Started Daily Cleanup of User's Temporary Directories.
Oct 01 16:33:02 compute-0 systemd[73274]: Reached target Paths.
Oct 01 16:33:02 compute-0 systemd[73274]: Reached target Timers.
Oct 01 16:33:02 compute-0 systemd[73274]: Starting D-Bus User Message Bus Socket...
Oct 01 16:33:02 compute-0 systemd[73274]: Starting Create User's Volatile Files and Directories...
Oct 01 16:33:02 compute-0 systemd[73274]: Listening on D-Bus User Message Bus Socket.
Oct 01 16:33:02 compute-0 systemd[73274]: Reached target Sockets.
Oct 01 16:33:02 compute-0 systemd[73274]: Finished Create User's Volatile Files and Directories.
Oct 01 16:33:02 compute-0 systemd[73274]: Reached target Basic System.
Oct 01 16:33:02 compute-0 systemd[73274]: Reached target Main User Target.
Oct 01 16:33:02 compute-0 systemd[73274]: Startup finished in 118ms.
Oct 01 16:33:02 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct 01 16:33:02 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Oct 01 16:33:02 compute-0 sshd-session[73270]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 16:33:02 compute-0 sudo[73291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Oct 01 16:33:02 compute-0 sudo[73291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:33:02 compute-0 sudo[73291]: pam_unix(sudo:session): session closed for user root
Oct 01 16:33:02 compute-0 sshd-session[73290]: Received disconnect from 192.168.122.100 port 39422:11: disconnected by user
Oct 01 16:33:02 compute-0 sshd-session[73290]: Disconnected from user ceph-admin 192.168.122.100 port 39422
Oct 01 16:33:02 compute-0 sshd-session[73270]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 01 16:33:02 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Oct 01 16:33:02 compute-0 systemd-logind[788]: Session 19 logged out. Waiting for processes to exit.
Oct 01 16:33:02 compute-0 systemd-logind[788]: Removed session 19.
Oct 01 16:33:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat222725838-lower\x2dmapped.mount: Deactivated successfully.
Oct 01 16:33:12 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Oct 01 16:33:12 compute-0 systemd[73274]: Activating special unit Exit the Session...
Oct 01 16:33:12 compute-0 systemd[73274]: Stopped target Main User Target.
Oct 01 16:33:12 compute-0 systemd[73274]: Stopped target Basic System.
Oct 01 16:33:12 compute-0 systemd[73274]: Stopped target Paths.
Oct 01 16:33:12 compute-0 systemd[73274]: Stopped target Sockets.
Oct 01 16:33:12 compute-0 systemd[73274]: Stopped target Timers.
Oct 01 16:33:12 compute-0 systemd[73274]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 01 16:33:12 compute-0 systemd[73274]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 01 16:33:12 compute-0 systemd[73274]: Closed D-Bus User Message Bus Socket.
Oct 01 16:33:12 compute-0 systemd[73274]: Stopped Create User's Volatile Files and Directories.
Oct 01 16:33:12 compute-0 systemd[73274]: Removed slice User Application Slice.
Oct 01 16:33:12 compute-0 systemd[73274]: Reached target Shutdown.
Oct 01 16:33:12 compute-0 systemd[73274]: Finished Exit the Session.
Oct 01 16:33:12 compute-0 systemd[73274]: Reached target Exit the Session.
Oct 01 16:33:12 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Oct 01 16:33:12 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Oct 01 16:33:12 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 01 16:33:12 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 01 16:33:12 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 01 16:33:12 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 01 16:33:12 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Oct 01 16:33:16 compute-0 podman[73328]: 2025-10-01 16:33:16.302842267 +0000 UTC m=+13.642331601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:16 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 16:33:16 compute-0 podman[73389]: 2025-10-01 16:33:16.376507107 +0000 UTC m=+0.051997306 container create 7e73bb3af806385be7d1fc0af3816e90b935c9ffe3a81d4810feaf9b1e3d00c6 (image=quay.io/ceph/ceph:v18, name=nice_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:33:16 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 01 16:33:16 compute-0 systemd[1]: Started libpod-conmon-7e73bb3af806385be7d1fc0af3816e90b935c9ffe3a81d4810feaf9b1e3d00c6.scope.
Oct 01 16:33:16 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:16 compute-0 podman[73389]: 2025-10-01 16:33:16.348697963 +0000 UTC m=+0.024188252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:16 compute-0 podman[73389]: 2025-10-01 16:33:16.46676357 +0000 UTC m=+0.142253789 container init 7e73bb3af806385be7d1fc0af3816e90b935c9ffe3a81d4810feaf9b1e3d00c6 (image=quay.io/ceph/ceph:v18, name=nice_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 16:33:16 compute-0 podman[73389]: 2025-10-01 16:33:16.47561319 +0000 UTC m=+0.151103399 container start 7e73bb3af806385be7d1fc0af3816e90b935c9ffe3a81d4810feaf9b1e3d00c6 (image=quay.io/ceph/ceph:v18, name=nice_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:33:16 compute-0 podman[73389]: 2025-10-01 16:33:16.478919392 +0000 UTC m=+0.154409641 container attach 7e73bb3af806385be7d1fc0af3816e90b935c9ffe3a81d4810feaf9b1e3d00c6 (image=quay.io/ceph/ceph:v18, name=nice_hermann, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 16:33:16 compute-0 nice_hermann[73406]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct 01 16:33:16 compute-0 systemd[1]: libpod-7e73bb3af806385be7d1fc0af3816e90b935c9ffe3a81d4810feaf9b1e3d00c6.scope: Deactivated successfully.
Oct 01 16:33:16 compute-0 podman[73389]: 2025-10-01 16:33:16.787689561 +0000 UTC m=+0.463179790 container died 7e73bb3af806385be7d1fc0af3816e90b935c9ffe3a81d4810feaf9b1e3d00c6 (image=quay.io/ceph/ceph:v18, name=nice_hermann, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:33:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8873ab4a0a2bbd33195d30febd3078242378b20feca3652b84e82bbf7df2022-merged.mount: Deactivated successfully.
Oct 01 16:33:16 compute-0 podman[73389]: 2025-10-01 16:33:16.846625991 +0000 UTC m=+0.522116200 container remove 7e73bb3af806385be7d1fc0af3816e90b935c9ffe3a81d4810feaf9b1e3d00c6 (image=quay.io/ceph/ceph:v18, name=nice_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 01 16:33:16 compute-0 systemd[1]: libpod-conmon-7e73bb3af806385be7d1fc0af3816e90b935c9ffe3a81d4810feaf9b1e3d00c6.scope: Deactivated successfully.
Oct 01 16:33:16 compute-0 podman[73421]: 2025-10-01 16:33:16.914704571 +0000 UTC m=+0.046346344 container create 7b608ceaa70638e83cdaee139cd8aae7e2144274a1fd9e6a43b9f883495e638d (image=quay.io/ceph/ceph:v18, name=relaxed_moore, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:33:16 compute-0 systemd[1]: Started libpod-conmon-7b608ceaa70638e83cdaee139cd8aae7e2144274a1fd9e6a43b9f883495e638d.scope.
Oct 01 16:33:16 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:16 compute-0 podman[73421]: 2025-10-01 16:33:16.889400402 +0000 UTC m=+0.021042185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:16 compute-0 podman[73421]: 2025-10-01 16:33:16.988157583 +0000 UTC m=+0.119799386 container init 7b608ceaa70638e83cdaee139cd8aae7e2144274a1fd9e6a43b9f883495e638d (image=quay.io/ceph/ceph:v18, name=relaxed_moore, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:33:16 compute-0 podman[73421]: 2025-10-01 16:33:16.99601092 +0000 UTC m=+0.127652683 container start 7b608ceaa70638e83cdaee139cd8aae7e2144274a1fd9e6a43b9f883495e638d (image=quay.io/ceph/ceph:v18, name=relaxed_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:33:16 compute-0 relaxed_moore[73438]: 167 167
Oct 01 16:33:16 compute-0 systemd[1]: libpod-7b608ceaa70638e83cdaee139cd8aae7e2144274a1fd9e6a43b9f883495e638d.scope: Deactivated successfully.
Oct 01 16:33:16 compute-0 podman[73421]: 2025-10-01 16:33:16.999134086 +0000 UTC m=+0.130775859 container attach 7b608ceaa70638e83cdaee139cd8aae7e2144274a1fd9e6a43b9f883495e638d (image=quay.io/ceph/ceph:v18, name=relaxed_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:33:16 compute-0 podman[73421]: 2025-10-01 16:33:16.999471797 +0000 UTC m=+0.131113580 container died 7b608ceaa70638e83cdaee139cd8aae7e2144274a1fd9e6a43b9f883495e638d (image=quay.io/ceph/ceph:v18, name=relaxed_moore, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:33:17 compute-0 podman[73421]: 2025-10-01 16:33:17.030806881 +0000 UTC m=+0.162448664 container remove 7b608ceaa70638e83cdaee139cd8aae7e2144274a1fd9e6a43b9f883495e638d (image=quay.io/ceph/ceph:v18, name=relaxed_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:33:17 compute-0 systemd[1]: libpod-conmon-7b608ceaa70638e83cdaee139cd8aae7e2144274a1fd9e6a43b9f883495e638d.scope: Deactivated successfully.
Oct 01 16:33:17 compute-0 podman[73454]: 2025-10-01 16:33:17.084009646 +0000 UTC m=+0.034341326 container create 84f174b12479b22ab2d0446a4ef5091c00f2c066b6d3ad013b7304a3b1e9e950 (image=quay.io/ceph/ceph:v18, name=laughing_rhodes, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:33:17 compute-0 systemd[1]: Started libpod-conmon-84f174b12479b22ab2d0446a4ef5091c00f2c066b6d3ad013b7304a3b1e9e950.scope.
Oct 01 16:33:17 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:17 compute-0 podman[73454]: 2025-10-01 16:33:17.161006999 +0000 UTC m=+0.111338719 container init 84f174b12479b22ab2d0446a4ef5091c00f2c066b6d3ad013b7304a3b1e9e950 (image=quay.io/ceph/ceph:v18, name=laughing_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:33:17 compute-0 podman[73454]: 2025-10-01 16:33:17.068555612 +0000 UTC m=+0.018887332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:17 compute-0 podman[73454]: 2025-10-01 16:33:17.165999718 +0000 UTC m=+0.116331408 container start 84f174b12479b22ab2d0446a4ef5091c00f2c066b6d3ad013b7304a3b1e9e950 (image=quay.io/ceph/ceph:v18, name=laughing_rhodes, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:33:17 compute-0 podman[73454]: 2025-10-01 16:33:17.170184561 +0000 UTC m=+0.120516291 container attach 84f174b12479b22ab2d0446a4ef5091c00f2c066b6d3ad013b7304a3b1e9e950 (image=quay.io/ceph/ceph:v18, name=laughing_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 01 16:33:17 compute-0 laughing_rhodes[73471]: AQDNV91o8VwVCxAA0Nk4ue0Px/fZa/M6j0aOBg==
Oct 01 16:33:17 compute-0 systemd[1]: libpod-84f174b12479b22ab2d0446a4ef5091c00f2c066b6d3ad013b7304a3b1e9e950.scope: Deactivated successfully.
Oct 01 16:33:17 compute-0 podman[73454]: 2025-10-01 16:33:17.189601319 +0000 UTC m=+0.139932999 container died 84f174b12479b22ab2d0446a4ef5091c00f2c066b6d3ad013b7304a3b1e9e950 (image=quay.io/ceph/ceph:v18, name=laughing_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 16:33:17 compute-0 podman[73454]: 2025-10-01 16:33:17.227102192 +0000 UTC m=+0.177433882 container remove 84f174b12479b22ab2d0446a4ef5091c00f2c066b6d3ad013b7304a3b1e9e950 (image=quay.io/ceph/ceph:v18, name=laughing_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 16:33:17 compute-0 systemd[1]: libpod-conmon-84f174b12479b22ab2d0446a4ef5091c00f2c066b6d3ad013b7304a3b1e9e950.scope: Deactivated successfully.
Oct 01 16:33:17 compute-0 podman[73489]: 2025-10-01 16:33:17.284868982 +0000 UTC m=+0.039376447 container create 8c4f952c0809a402b6cd10f690e2a02486f0e97b9bc9c5762451c09fd0f900f1 (image=quay.io/ceph/ceph:v18, name=vibrant_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:33:17 compute-0 systemd[1]: Started libpod-conmon-8c4f952c0809a402b6cd10f690e2a02486f0e97b9bc9c5762451c09fd0f900f1.scope.
Oct 01 16:33:17 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:17 compute-0 podman[73489]: 2025-10-01 16:33:17.353426179 +0000 UTC m=+0.107933664 container init 8c4f952c0809a402b6cd10f690e2a02486f0e97b9bc9c5762451c09fd0f900f1 (image=quay.io/ceph/ceph:v18, name=vibrant_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:33:17 compute-0 podman[73489]: 2025-10-01 16:33:17.36171887 +0000 UTC m=+0.116226325 container start 8c4f952c0809a402b6cd10f690e2a02486f0e97b9bc9c5762451c09fd0f900f1 (image=quay.io/ceph/ceph:v18, name=vibrant_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 16:33:17 compute-0 podman[73489]: 2025-10-01 16:33:17.265834146 +0000 UTC m=+0.020341651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:17 compute-0 podman[73489]: 2025-10-01 16:33:17.372738594 +0000 UTC m=+0.127246069 container attach 8c4f952c0809a402b6cd10f690e2a02486f0e97b9bc9c5762451c09fd0f900f1 (image=quay.io/ceph/ceph:v18, name=vibrant_noether, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 16:33:17 compute-0 vibrant_noether[73505]: AQDNV91oTiulFhAAvPthX3oJRg2aQPg4xM6rfg==
Oct 01 16:33:17 compute-0 systemd[1]: libpod-8c4f952c0809a402b6cd10f690e2a02486f0e97b9bc9c5762451c09fd0f900f1.scope: Deactivated successfully.
Oct 01 16:33:17 compute-0 podman[73489]: 2025-10-01 16:33:17.383452878 +0000 UTC m=+0.137960333 container died 8c4f952c0809a402b6cd10f690e2a02486f0e97b9bc9c5762451c09fd0f900f1 (image=quay.io/ceph/ceph:v18, name=vibrant_noether, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:33:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c1771cdb2fcfbc365b55abc4311af8820e82ede1aacae92a8e71566ed52c7d9-merged.mount: Deactivated successfully.
Oct 01 16:33:17 compute-0 podman[73489]: 2025-10-01 16:33:17.425880087 +0000 UTC m=+0.180387542 container remove 8c4f952c0809a402b6cd10f690e2a02486f0e97b9bc9c5762451c09fd0f900f1 (image=quay.io/ceph/ceph:v18, name=vibrant_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 16:33:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 16:33:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 16:33:17 compute-0 systemd[1]: libpod-conmon-8c4f952c0809a402b6cd10f690e2a02486f0e97b9bc9c5762451c09fd0f900f1.scope: Deactivated successfully.
Oct 01 16:33:17 compute-0 podman[73524]: 2025-10-01 16:33:17.482019443 +0000 UTC m=+0.035810117 container create 88c115ebd5494ed16feee5d9494cb99f64d909e3460e9051c26ab3df9b74c236 (image=quay.io/ceph/ceph:v18, name=stoic_clarke, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 16:33:17 compute-0 systemd[1]: Started libpod-conmon-88c115ebd5494ed16feee5d9494cb99f64d909e3460e9051c26ab3df9b74c236.scope.
Oct 01 16:33:17 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:17 compute-0 podman[73524]: 2025-10-01 16:33:17.4663223 +0000 UTC m=+0.020112994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:17 compute-0 podman[73524]: 2025-10-01 16:33:17.80962093 +0000 UTC m=+0.363411624 container init 88c115ebd5494ed16feee5d9494cb99f64d909e3460e9051c26ab3df9b74c236 (image=quay.io/ceph/ceph:v18, name=stoic_clarke, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 01 16:33:17 compute-0 podman[73524]: 2025-10-01 16:33:17.814522826 +0000 UTC m=+0.368313510 container start 88c115ebd5494ed16feee5d9494cb99f64d909e3460e9051c26ab3df9b74c236 (image=quay.io/ceph/ceph:v18, name=stoic_clarke, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 16:33:17 compute-0 podman[73524]: 2025-10-01 16:33:17.8190447 +0000 UTC m=+0.372835424 container attach 88c115ebd5494ed16feee5d9494cb99f64d909e3460e9051c26ab3df9b74c236 (image=quay.io/ceph/ceph:v18, name=stoic_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 01 16:33:17 compute-0 stoic_clarke[73540]: AQDNV91oGxSNMRAAh75UjBOB8C0Riv9nfx6LfQ==
Oct 01 16:33:17 compute-0 systemd[1]: libpod-88c115ebd5494ed16feee5d9494cb99f64d909e3460e9051c26ab3df9b74c236.scope: Deactivated successfully.
Oct 01 16:33:17 compute-0 conmon[73540]: conmon 88c115ebd5494ed16fee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-88c115ebd5494ed16feee5d9494cb99f64d909e3460e9051c26ab3df9b74c236.scope/container/memory.events
Oct 01 16:33:17 compute-0 podman[73524]: 2025-10-01 16:33:17.835323242 +0000 UTC m=+0.389113916 container died 88c115ebd5494ed16feee5d9494cb99f64d909e3460e9051c26ab3df9b74c236 (image=quay.io/ceph/ceph:v18, name=stoic_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 16:33:17 compute-0 podman[73524]: 2025-10-01 16:33:17.874034876 +0000 UTC m=+0.427825550 container remove 88c115ebd5494ed16feee5d9494cb99f64d909e3460e9051c26ab3df9b74c236 (image=quay.io/ceph/ceph:v18, name=stoic_clarke, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:33:17 compute-0 systemd[1]: libpod-conmon-88c115ebd5494ed16feee5d9494cb99f64d909e3460e9051c26ab3df9b74c236.scope: Deactivated successfully.
Oct 01 16:33:17 compute-0 podman[73559]: 2025-10-01 16:33:17.945964097 +0000 UTC m=+0.051135407 container create 045c20d30a90888ea74e1108744059d11f98d90d9304b666e8a357b5acc96cc8 (image=quay.io/ceph/ceph:v18, name=great_gagarin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 16:33:17 compute-0 systemd[1]: Started libpod-conmon-045c20d30a90888ea74e1108744059d11f98d90d9304b666e8a357b5acc96cc8.scope.
Oct 01 16:33:17 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70e7ef2c8236bd792722f2697fee29441bb359bdd883b5134f94743a40b23586/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:18 compute-0 podman[73559]: 2025-10-01 16:33:18.011980817 +0000 UTC m=+0.117152167 container init 045c20d30a90888ea74e1108744059d11f98d90d9304b666e8a357b5acc96cc8 (image=quay.io/ceph/ceph:v18, name=great_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:33:18 compute-0 podman[73559]: 2025-10-01 16:33:17.92043513 +0000 UTC m=+0.025606540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:18 compute-0 podman[73559]: 2025-10-01 16:33:18.018141576 +0000 UTC m=+0.123312886 container start 045c20d30a90888ea74e1108744059d11f98d90d9304b666e8a357b5acc96cc8 (image=quay.io/ceph/ceph:v18, name=great_gagarin, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:33:18 compute-0 podman[73559]: 2025-10-01 16:33:18.02238041 +0000 UTC m=+0.127551740 container attach 045c20d30a90888ea74e1108744059d11f98d90d9304b666e8a357b5acc96cc8 (image=quay.io/ceph/ceph:v18, name=great_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:33:18 compute-0 great_gagarin[73576]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct 01 16:33:18 compute-0 great_gagarin[73576]: setting min_mon_release = pacific
Oct 01 16:33:18 compute-0 great_gagarin[73576]: /usr/bin/monmaptool: set fsid to f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:33:18 compute-0 great_gagarin[73576]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct 01 16:33:18 compute-0 systemd[1]: libpod-045c20d30a90888ea74e1108744059d11f98d90d9304b666e8a357b5acc96cc8.scope: Deactivated successfully.
Oct 01 16:33:18 compute-0 podman[73559]: 2025-10-01 16:33:18.054816071 +0000 UTC m=+0.159987421 container died 045c20d30a90888ea74e1108744059d11f98d90d9304b666e8a357b5acc96cc8 (image=quay.io/ceph/ceph:v18, name=great_gagarin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 16:33:18 compute-0 podman[73559]: 2025-10-01 16:33:18.091956441 +0000 UTC m=+0.197127761 container remove 045c20d30a90888ea74e1108744059d11f98d90d9304b666e8a357b5acc96cc8 (image=quay.io/ceph/ceph:v18, name=great_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 16:33:18 compute-0 systemd[1]: libpod-conmon-045c20d30a90888ea74e1108744059d11f98d90d9304b666e8a357b5acc96cc8.scope: Deactivated successfully.
Oct 01 16:33:18 compute-0 podman[73595]: 2025-10-01 16:33:18.15559034 +0000 UTC m=+0.041887532 container create 24add589696a17bbd8ee8ac57dbd177c2d32f6490fb4a0ef0862969d73bd598d (image=quay.io/ceph/ceph:v18, name=romantic_saha, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:33:18 compute-0 systemd[1]: Started libpod-conmon-24add589696a17bbd8ee8ac57dbd177c2d32f6490fb4a0ef0862969d73bd598d.scope.
Oct 01 16:33:18 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72287d1175a27751c03a266a2a05dd489d6d966889a22a4cbd989d4c1203c24/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72287d1175a27751c03a266a2a05dd489d6d966889a22a4cbd989d4c1203c24/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72287d1175a27751c03a266a2a05dd489d6d966889a22a4cbd989d4c1203c24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72287d1175a27751c03a266a2a05dd489d6d966889a22a4cbd989d4c1203c24/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:18 compute-0 podman[73595]: 2025-10-01 16:33:18.232043505 +0000 UTC m=+0.118340687 container init 24add589696a17bbd8ee8ac57dbd177c2d32f6490fb4a0ef0862969d73bd598d (image=quay.io/ceph/ceph:v18, name=romantic_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 16:33:18 compute-0 podman[73595]: 2025-10-01 16:33:18.140949034 +0000 UTC m=+0.027246236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:18 compute-0 podman[73595]: 2025-10-01 16:33:18.240535833 +0000 UTC m=+0.126833025 container start 24add589696a17bbd8ee8ac57dbd177c2d32f6490fb4a0ef0862969d73bd598d (image=quay.io/ceph/ceph:v18, name=romantic_saha, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 01 16:33:18 compute-0 podman[73595]: 2025-10-01 16:33:18.244347262 +0000 UTC m=+0.130644474 container attach 24add589696a17bbd8ee8ac57dbd177c2d32f6490fb4a0ef0862969d73bd598d (image=quay.io/ceph/ceph:v18, name=romantic_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 16:33:18 compute-0 systemd[1]: libpod-24add589696a17bbd8ee8ac57dbd177c2d32f6490fb4a0ef0862969d73bd598d.scope: Deactivated successfully.
Oct 01 16:33:18 compute-0 podman[73595]: 2025-10-01 16:33:18.306951447 +0000 UTC m=+0.193248629 container died 24add589696a17bbd8ee8ac57dbd177c2d32f6490fb4a0ef0862969d73bd598d (image=quay.io/ceph/ceph:v18, name=romantic_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 16:33:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f72287d1175a27751c03a266a2a05dd489d6d966889a22a4cbd989d4c1203c24-merged.mount: Deactivated successfully.
Oct 01 16:33:18 compute-0 podman[73595]: 2025-10-01 16:33:18.347338547 +0000 UTC m=+0.233635739 container remove 24add589696a17bbd8ee8ac57dbd177c2d32f6490fb4a0ef0862969d73bd598d (image=quay.io/ceph/ceph:v18, name=romantic_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct 01 16:33:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 16:33:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 16:33:18 compute-0 systemd[1]: libpod-conmon-24add589696a17bbd8ee8ac57dbd177c2d32f6490fb4a0ef0862969d73bd598d.scope: Deactivated successfully.
Oct 01 16:33:18 compute-0 systemd[1]: Reloading.
Oct 01 16:33:18 compute-0 systemd-rc-local-generator[73679]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:33:18 compute-0 systemd-sysv-generator[73683]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:33:18 compute-0 systemd[1]: Reloading.
Oct 01 16:33:18 compute-0 systemd-rc-local-generator[73714]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:33:18 compute-0 systemd-sysv-generator[73718]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:33:18 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Oct 01 16:33:18 compute-0 systemd[1]: Reloading.
Oct 01 16:33:18 compute-0 systemd-rc-local-generator[73756]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:33:18 compute-0 systemd-sysv-generator[73760]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:33:19 compute-0 systemd[1]: Reached target Ceph cluster f44264e3-e26a-5bd3-9e84-b4ba651d9cf5.
Oct 01 16:33:19 compute-0 systemd[1]: Reloading.
Oct 01 16:33:19 compute-0 systemd-rc-local-generator[73794]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:33:19 compute-0 systemd-sysv-generator[73797]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:33:19 compute-0 systemd[1]: Reloading.
Oct 01 16:33:19 compute-0 systemd-rc-local-generator[73834]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:33:19 compute-0 systemd-sysv-generator[73837]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:33:19 compute-0 systemd[1]: Created slice Slice /system/ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5.
Oct 01 16:33:19 compute-0 systemd[1]: Reached target System Time Set.
Oct 01 16:33:19 compute-0 systemd[1]: Reached target System Time Synchronized.
Oct 01 16:33:19 compute-0 systemd[1]: Starting Ceph mon.compute-0 for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5...
Oct 01 16:33:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 16:33:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 16:33:19 compute-0 podman[73891]: 2025-10-01 16:33:19.795327644 +0000 UTC m=+0.033659273 container create fb2a60c537dc9408ce0a73869126efc19672960b4a2207b7dbf6051b0baed8bb (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:33:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1324f388d5b346552e675aafdbb29a27ff46862c4ef539d629f9874b03a3638/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1324f388d5b346552e675aafdbb29a27ff46862c4ef539d629f9874b03a3638/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1324f388d5b346552e675aafdbb29a27ff46862c4ef539d629f9874b03a3638/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1324f388d5b346552e675aafdbb29a27ff46862c4ef539d629f9874b03a3638/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:19 compute-0 podman[73891]: 2025-10-01 16:33:19.842268707 +0000 UTC m=+0.080600356 container init fb2a60c537dc9408ce0a73869126efc19672960b4a2207b7dbf6051b0baed8bb (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:33:19 compute-0 podman[73891]: 2025-10-01 16:33:19.849219683 +0000 UTC m=+0.087551312 container start fb2a60c537dc9408ce0a73869126efc19672960b4a2207b7dbf6051b0baed8bb (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 16:33:19 compute-0 bash[73891]: fb2a60c537dc9408ce0a73869126efc19672960b4a2207b7dbf6051b0baed8bb
Oct 01 16:33:19 compute-0 podman[73891]: 2025-10-01 16:33:19.780418518 +0000 UTC m=+0.018750167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:19 compute-0 systemd[1]: Started Ceph mon.compute-0 for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5.
Oct 01 16:33:19 compute-0 ceph-mon[73911]: set uid:gid to 167:167 (ceph:ceph)
Oct 01 16:33:19 compute-0 ceph-mon[73911]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 01 16:33:19 compute-0 ceph-mon[73911]: pidfile_write: ignore empty --pid-file
Oct 01 16:33:19 compute-0 ceph-mon[73911]: load: jerasure load: lrc 
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: RocksDB version: 7.9.2
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Git sha 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: DB SUMMARY
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: DB Session ID:  L7RIXZYHW475DUXXRW7X
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: CURRENT file:  CURRENT
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: IDENTITY file:  IDENTITY
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                         Options.error_if_exists: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                       Options.create_if_missing: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                         Options.paranoid_checks: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                                     Options.env: 0x5651068a9c40
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                                Options.info_log: 0x565108cfae80
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                Options.max_file_opening_threads: 16
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                              Options.statistics: (nil)
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                               Options.use_fsync: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                       Options.max_log_file_size: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                         Options.allow_fallocate: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                        Options.use_direct_reads: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:          Options.create_missing_column_families: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                              Options.db_log_dir: 
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                                 Options.wal_dir: 
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                   Options.advise_random_on_open: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                    Options.write_buffer_manager: 0x565108d0ab40
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                            Options.rate_limiter: (nil)
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                  Options.unordered_write: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                               Options.row_cache: None
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                              Options.wal_filter: None
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.allow_ingest_behind: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.two_write_queues: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.manual_wal_flush: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.wal_compression: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.atomic_flush: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                 Options.log_readahead_size: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.allow_data_in_errors: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.db_host_id: __hostname__
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.max_background_jobs: 2
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.max_background_compactions: -1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.max_subcompactions: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.max_total_wal_size: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                          Options.max_open_files: -1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                          Options.bytes_per_sync: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:       Options.compaction_readahead_size: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                  Options.max_background_flushes: -1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Compression algorithms supported:
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         kZSTD supported: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         kXpressCompression supported: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         kBZip2Compression supported: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         kLZ4Compression supported: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         kZlibCompression supported: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         kLZ4HCCompression supported: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         kSnappyCompression supported: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:           Options.merge_operator: 
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:        Options.compaction_filter: None
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565108cfaa80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x565108cf31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:        Options.write_buffer_size: 33554432
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:  Options.max_write_buffer_number: 2
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:          Options.compression: NoCompression
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.num_levels: 7
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3703b1af-85cb-46a0-a42e-c54c049b0356
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336399894791, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336399907514, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336399, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "L7RIXZYHW475DUXXRW7X", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336399907696, "job": 1, "event": "recovery_finished"}
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 01 16:33:19 compute-0 podman[73912]: 2025-10-01 16:33:19.946159513 +0000 UTC m=+0.062582255 container create ca84781ea1feb45da5a3d89b68e18867bdd295f609ef46c31ecc174eb7cb754d (image=quay.io/ceph/ceph:v18, name=elated_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x565108d1ce00
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: DB pointer 0x565108da6000
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 16:33:19 compute-0 ceph-mon[73911]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x565108cf31f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 01 16:33:19 compute-0 ceph-mon[73911]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mon.compute-0@-1(???) e0 preinit fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mon.compute-0@0(probing) e0 win_standalone_election
Oct 01 16:33:19 compute-0 ceph-mon[73911]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 01 16:33:19 compute-0 ceph-mon[73911]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 01 16:33:19 compute-0 ceph-mon[73911]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 01 16:33:19 compute-0 ceph-mon[73911]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 01 16:33:19 compute-0 systemd[1]: Started libpod-conmon-ca84781ea1feb45da5a3d89b68e18867bdd295f609ef46c31ecc174eb7cb754d.scope.
Oct 01 16:33:19 compute-0 ceph-mon[73911]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-10-01T16:33:18.271094Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864100,os=Linux}
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 16:33:19 compute-0 ceph-mon[73911]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Oct 01 16:33:20 compute-0 podman[73912]: 2025-10-01 16:33:19.908709982 +0000 UTC m=+0.025132714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:20 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:20 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).mds e1 new map
Oct 01 16:33:20 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Oct 01 16:33:20 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 01 16:33:20 compute-0 ceph-mon[73911]: log_channel(cluster) log [DBG] : fsmap 
Oct 01 16:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/231cd75fb46fa513a807f523e6bb3448d296f78caac373385dc85752e9e7e88b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/231cd75fb46fa513a807f523e6bb3448d296f78caac373385dc85752e9e7e88b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/231cd75fb46fa513a807f523e6bb3448d296f78caac373385dc85752e9e7e88b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:20 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 01 16:33:20 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 01 16:33:20 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct 01 16:33:20 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 01 16:33:20 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 01 16:33:20 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 01 16:33:20 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 01 16:33:20 compute-0 ceph-mon[73911]: mkfs f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:33:20 compute-0 ceph-mon[73911]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct 01 16:33:20 compute-0 podman[73912]: 2025-10-01 16:33:20.03476582 +0000 UTC m=+0.151188572 container init ca84781ea1feb45da5a3d89b68e18867bdd295f609ef46c31ecc174eb7cb754d (image=quay.io/ceph/ceph:v18, name=elated_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 16:33:20 compute-0 ceph-mon[73911]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 01 16:33:20 compute-0 ceph-mon[73911]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 01 16:33:20 compute-0 ceph-mon[73911]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 01 16:33:20 compute-0 podman[73912]: 2025-10-01 16:33:20.04920855 +0000 UTC m=+0.165631282 container start ca84781ea1feb45da5a3d89b68e18867bdd295f609ef46c31ecc174eb7cb754d (image=quay.io/ceph/ceph:v18, name=elated_wilson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 16:33:20 compute-0 podman[73912]: 2025-10-01 16:33:20.052672817 +0000 UTC m=+0.169095549 container attach ca84781ea1feb45da5a3d89b68e18867bdd295f609ef46c31ecc174eb7cb754d (image=quay.io/ceph/ceph:v18, name=elated_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:33:20 compute-0 ceph-mon[73911]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 01 16:33:20 compute-0 ceph-mon[73911]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2174511609' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 01 16:33:20 compute-0 elated_wilson[73966]:   cluster:
Oct 01 16:33:20 compute-0 elated_wilson[73966]:     id:     f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:33:20 compute-0 elated_wilson[73966]:     health: HEALTH_OK
Oct 01 16:33:20 compute-0 elated_wilson[73966]:  
Oct 01 16:33:20 compute-0 elated_wilson[73966]:   services:
Oct 01 16:33:20 compute-0 elated_wilson[73966]:     mon: 1 daemons, quorum compute-0 (age 0.454636s)
Oct 01 16:33:20 compute-0 elated_wilson[73966]:     mgr: no daemons active
Oct 01 16:33:20 compute-0 elated_wilson[73966]:     osd: 0 osds: 0 up, 0 in
Oct 01 16:33:20 compute-0 elated_wilson[73966]:  
Oct 01 16:33:20 compute-0 elated_wilson[73966]:   data:
Oct 01 16:33:20 compute-0 elated_wilson[73966]:     pools:   0 pools, 0 pgs
Oct 01 16:33:20 compute-0 elated_wilson[73966]:     objects: 0 objects, 0 B
Oct 01 16:33:20 compute-0 elated_wilson[73966]:     usage:   0 B used, 0 B / 0 B avail
Oct 01 16:33:20 compute-0 elated_wilson[73966]:     pgs:     
Oct 01 16:33:20 compute-0 elated_wilson[73966]:  
Oct 01 16:33:20 compute-0 systemd[1]: libpod-ca84781ea1feb45da5a3d89b68e18867bdd295f609ef46c31ecc174eb7cb754d.scope: Deactivated successfully.
Oct 01 16:33:20 compute-0 podman[73912]: 2025-10-01 16:33:20.457117712 +0000 UTC m=+0.573540454 container died ca84781ea1feb45da5a3d89b68e18867bdd295f609ef46c31ecc174eb7cb754d (image=quay.io/ceph/ceph:v18, name=elated_wilson, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 16:33:20 compute-0 podman[73912]: 2025-10-01 16:33:20.538174613 +0000 UTC m=+0.654597385 container remove ca84781ea1feb45da5a3d89b68e18867bdd295f609ef46c31ecc174eb7cb754d (image=quay.io/ceph/ceph:v18, name=elated_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 16:33:20 compute-0 systemd[1]: libpod-conmon-ca84781ea1feb45da5a3d89b68e18867bdd295f609ef46c31ecc174eb7cb754d.scope: Deactivated successfully.
Oct 01 16:33:20 compute-0 podman[74004]: 2025-10-01 16:33:20.595175487 +0000 UTC m=+0.035926110 container create 97c8a77adf3a235a4871a30d5897e415bc99c9bf0e7da7a58f6aa806090ffb21 (image=quay.io/ceph/ceph:v18, name=hungry_banzai, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:33:20 compute-0 podman[74004]: 2025-10-01 16:33:20.580156278 +0000 UTC m=+0.020906921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:20 compute-0 systemd[1]: Started libpod-conmon-97c8a77adf3a235a4871a30d5897e415bc99c9bf0e7da7a58f6aa806090ffb21.scope.
Oct 01 16:33:20 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71ec595505a4dd22ed748bd4d8ceeea091c9f99c5d52d867b8f48e0ffca8c880/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71ec595505a4dd22ed748bd4d8ceeea091c9f99c5d52d867b8f48e0ffca8c880/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71ec595505a4dd22ed748bd4d8ceeea091c9f99c5d52d867b8f48e0ffca8c880/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71ec595505a4dd22ed748bd4d8ceeea091c9f99c5d52d867b8f48e0ffca8c880/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:20 compute-0 podman[74004]: 2025-10-01 16:33:20.816642193 +0000 UTC m=+0.257392836 container init 97c8a77adf3a235a4871a30d5897e415bc99c9bf0e7da7a58f6aa806090ffb21 (image=quay.io/ceph/ceph:v18, name=hungry_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 16:33:20 compute-0 podman[74004]: 2025-10-01 16:33:20.822186651 +0000 UTC m=+0.262937274 container start 97c8a77adf3a235a4871a30d5897e415bc99c9bf0e7da7a58f6aa806090ffb21 (image=quay.io/ceph/ceph:v18, name=hungry_banzai, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 16:33:20 compute-0 podman[74004]: 2025-10-01 16:33:20.825556116 +0000 UTC m=+0.266306739 container attach 97c8a77adf3a235a4871a30d5897e415bc99c9bf0e7da7a58f6aa806090ffb21 (image=quay.io/ceph/ceph:v18, name=hungry_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 16:33:21 compute-0 ceph-mon[73911]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 01 16:33:21 compute-0 ceph-mon[73911]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 01 16:33:21 compute-0 ceph-mon[73911]: fsmap 
Oct 01 16:33:21 compute-0 ceph-mon[73911]: osdmap e1: 0 total, 0 up, 0 in
Oct 01 16:33:21 compute-0 ceph-mon[73911]: mgrmap e1: no daemons active
Oct 01 16:33:21 compute-0 ceph-mon[73911]: from='client.? 192.168.122.100:0/2174511609' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 01 16:33:21 compute-0 ceph-mon[73911]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 01 16:33:21 compute-0 ceph-mon[73911]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3676565351' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 01 16:33:21 compute-0 ceph-mon[73911]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3676565351' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 01 16:33:21 compute-0 hungry_banzai[74021]: 
Oct 01 16:33:21 compute-0 hungry_banzai[74021]: [global]
Oct 01 16:33:21 compute-0 hungry_banzai[74021]:         fsid = f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:33:21 compute-0 hungry_banzai[74021]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct 01 16:33:21 compute-0 hungry_banzai[74021]:         osd_crush_chooseleaf_type = 0
Oct 01 16:33:21 compute-0 systemd[1]: libpod-97c8a77adf3a235a4871a30d5897e415bc99c9bf0e7da7a58f6aa806090ffb21.scope: Deactivated successfully.
Oct 01 16:33:21 compute-0 podman[74004]: 2025-10-01 16:33:21.228612853 +0000 UTC m=+0.669363476 container died 97c8a77adf3a235a4871a30d5897e415bc99c9bf0e7da7a58f6aa806090ffb21 (image=quay.io/ceph/ceph:v18, name=hungry_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:33:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-71ec595505a4dd22ed748bd4d8ceeea091c9f99c5d52d867b8f48e0ffca8c880-merged.mount: Deactivated successfully.
Oct 01 16:33:21 compute-0 podman[74004]: 2025-10-01 16:33:21.446180477 +0000 UTC m=+0.886931100 container remove 97c8a77adf3a235a4871a30d5897e415bc99c9bf0e7da7a58f6aa806090ffb21 (image=quay.io/ceph/ceph:v18, name=hungry_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 16:33:21 compute-0 systemd[1]: libpod-conmon-97c8a77adf3a235a4871a30d5897e415bc99c9bf0e7da7a58f6aa806090ffb21.scope: Deactivated successfully.
Oct 01 16:33:21 compute-0 podman[74060]: 2025-10-01 16:33:21.532000929 +0000 UTC m=+0.067660207 container create b566d4debd8b98c6a63e8a72762c3e0201ffa2ff41d5ef0a694839202a1f6719 (image=quay.io/ceph/ceph:v18, name=optimistic_bouman, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:33:21 compute-0 systemd[1]: Started libpod-conmon-b566d4debd8b98c6a63e8a72762c3e0201ffa2ff41d5ef0a694839202a1f6719.scope.
Oct 01 16:33:21 compute-0 podman[74060]: 2025-10-01 16:33:21.483552575 +0000 UTC m=+0.019211913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:21 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c682ea489e968d5f82600725d4be81ae98a444db386f97dbd0fde5275bd048c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c682ea489e968d5f82600725d4be81ae98a444db386f97dbd0fde5275bd048c9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c682ea489e968d5f82600725d4be81ae98a444db386f97dbd0fde5275bd048c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c682ea489e968d5f82600725d4be81ae98a444db386f97dbd0fde5275bd048c9/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:21 compute-0 podman[74060]: 2025-10-01 16:33:21.652126945 +0000 UTC m=+0.187786263 container init b566d4debd8b98c6a63e8a72762c3e0201ffa2ff41d5ef0a694839202a1f6719 (image=quay.io/ceph/ceph:v18, name=optimistic_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:33:21 compute-0 podman[74060]: 2025-10-01 16:33:21.6590502 +0000 UTC m=+0.194709488 container start b566d4debd8b98c6a63e8a72762c3e0201ffa2ff41d5ef0a694839202a1f6719 (image=quay.io/ceph/ceph:v18, name=optimistic_bouman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:33:21 compute-0 podman[74060]: 2025-10-01 16:33:21.690596571 +0000 UTC m=+0.226255909 container attach b566d4debd8b98c6a63e8a72762c3e0201ffa2ff41d5ef0a694839202a1f6719 (image=quay.io/ceph/ceph:v18, name=optimistic_bouman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 16:33:22 compute-0 ceph-mon[73911]: from='client.? 192.168.122.100:0/3676565351' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 01 16:33:22 compute-0 ceph-mon[73911]: from='client.? 192.168.122.100:0/3676565351' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 01 16:33:22 compute-0 ceph-mon[73911]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:33:22 compute-0 ceph-mon[73911]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3527772298' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:33:22 compute-0 systemd[1]: libpod-b566d4debd8b98c6a63e8a72762c3e0201ffa2ff41d5ef0a694839202a1f6719.scope: Deactivated successfully.
Oct 01 16:33:22 compute-0 podman[74104]: 2025-10-01 16:33:22.142559268 +0000 UTC m=+0.025441164 container died b566d4debd8b98c6a63e8a72762c3e0201ffa2ff41d5ef0a694839202a1f6719 (image=quay.io/ceph/ceph:v18, name=optimistic_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:33:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c682ea489e968d5f82600725d4be81ae98a444db386f97dbd0fde5275bd048c9-merged.mount: Deactivated successfully.
Oct 01 16:33:22 compute-0 podman[74104]: 2025-10-01 16:33:22.186172508 +0000 UTC m=+0.069054384 container remove b566d4debd8b98c6a63e8a72762c3e0201ffa2ff41d5ef0a694839202a1f6719 (image=quay.io/ceph/ceph:v18, name=optimistic_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 16:33:22 compute-0 systemd[1]: libpod-conmon-b566d4debd8b98c6a63e8a72762c3e0201ffa2ff41d5ef0a694839202a1f6719.scope: Deactivated successfully.
Oct 01 16:33:22 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5...
Oct 01 16:33:22 compute-0 ceph-mon[73911]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 01 16:33:22 compute-0 ceph-mon[73911]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 01 16:33:22 compute-0 ceph-mon[73911]: mon.compute-0@0(leader) e1 shutdown
Oct 01 16:33:22 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0[73907]: 2025-10-01T16:33:22.417+0000 7f4226b53640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 01 16:33:22 compute-0 ceph-mon[73911]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 01 16:33:22 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0[73907]: 2025-10-01T16:33:22.417+0000 7f4226b53640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 01 16:33:22 compute-0 ceph-mon[73911]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 01 16:33:22 compute-0 podman[74149]: 2025-10-01 16:33:22.458961496 +0000 UTC m=+0.092677777 container died fb2a60c537dc9408ce0a73869126efc19672960b4a2207b7dbf6051b0baed8bb (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 16:33:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1324f388d5b346552e675aafdbb29a27ff46862c4ef539d629f9874b03a3638-merged.mount: Deactivated successfully.
Oct 01 16:33:22 compute-0 podman[74149]: 2025-10-01 16:33:22.722482068 +0000 UTC m=+0.356198349 container remove fb2a60c537dc9408ce0a73869126efc19672960b4a2207b7dbf6051b0baed8bb (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:33:22 compute-0 bash[74149]: ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0
Oct 01 16:33:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 16:33:22 compute-0 systemd[1]: ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5@mon.compute-0.service: Deactivated successfully.
Oct 01 16:33:22 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5.
Oct 01 16:33:22 compute-0 systemd[1]: Starting Ceph mon.compute-0 for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5...
Oct 01 16:33:23 compute-0 podman[74254]: 2025-10-01 16:33:23.0644068 +0000 UTC m=+0.051558900 container create bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:33:23 compute-0 podman[74254]: 2025-10-01 16:33:23.034420913 +0000 UTC m=+0.021573063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd00802fb86c5b9f9d2d1295bc34597bf41f9d675e1092ffeb8e76338b2459a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd00802fb86c5b9f9d2d1295bc34597bf41f9d675e1092ffeb8e76338b2459a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd00802fb86c5b9f9d2d1295bc34597bf41f9d675e1092ffeb8e76338b2459a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd00802fb86c5b9f9d2d1295bc34597bf41f9d675e1092ffeb8e76338b2459a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:23 compute-0 podman[74254]: 2025-10-01 16:33:23.280367029 +0000 UTC m=+0.267519169 container init bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 16:33:23 compute-0 podman[74254]: 2025-10-01 16:33:23.285243784 +0000 UTC m=+0.272395894 container start bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:33:23 compute-0 ceph-mon[74273]: set uid:gid to 167:167 (ceph:ceph)
Oct 01 16:33:23 compute-0 ceph-mon[74273]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 01 16:33:23 compute-0 ceph-mon[74273]: pidfile_write: ignore empty --pid-file
Oct 01 16:33:23 compute-0 ceph-mon[74273]: load: jerasure load: lrc 
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: RocksDB version: 7.9.2
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Git sha 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: DB SUMMARY
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: DB Session ID:  Q91HFJNCEI5G0QGGY20B
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: CURRENT file:  CURRENT
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: IDENTITY file:  IDENTITY
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 54564 ; 
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                         Options.error_if_exists: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                       Options.create_if_missing: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                         Options.paranoid_checks: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                                     Options.env: 0x5647cf4a1c40
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                                Options.info_log: 0x5647d11e1040
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                Options.max_file_opening_threads: 16
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                              Options.statistics: (nil)
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                               Options.use_fsync: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                       Options.max_log_file_size: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                         Options.allow_fallocate: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                        Options.use_direct_reads: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:          Options.create_missing_column_families: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                              Options.db_log_dir: 
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                                 Options.wal_dir: 
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                   Options.advise_random_on_open: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                    Options.write_buffer_manager: 0x5647d11f0b40
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                            Options.rate_limiter: (nil)
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                  Options.unordered_write: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                               Options.row_cache: None
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                              Options.wal_filter: None
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.allow_ingest_behind: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.two_write_queues: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.manual_wal_flush: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.wal_compression: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.atomic_flush: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                 Options.log_readahead_size: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.allow_data_in_errors: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.db_host_id: __hostname__
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.max_background_jobs: 2
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.max_background_compactions: -1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.max_subcompactions: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.max_total_wal_size: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                          Options.max_open_files: -1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                          Options.bytes_per_sync: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:       Options.compaction_readahead_size: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                  Options.max_background_flushes: -1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Compression algorithms supported:
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         kZSTD supported: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         kXpressCompression supported: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         kBZip2Compression supported: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         kLZ4Compression supported: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         kZlibCompression supported: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         kLZ4HCCompression supported: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         kSnappyCompression supported: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:           Options.merge_operator: 
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:        Options.compaction_filter: None
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5647d11e0c40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5647d11d91f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:        Options.write_buffer_size: 33554432
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:  Options.max_write_buffer_number: 2
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:          Options.compression: NoCompression
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.num_levels: 7
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3703b1af-85cb-46a0-a42e-c54c049b0356
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336403324874, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336403404325, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54153, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 52695, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3023, "raw_average_key_size": 30, "raw_value_size": 50297, "raw_average_value_size": 502, "num_data_blocks": 8, "num_entries": 100, "num_filter_entries": 100, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336403, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336403404490, "job": 1, "event": "recovery_finished"}
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct 01 16:33:23 compute-0 bash[74254]: bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b
Oct 01 16:33:23 compute-0 systemd[1]: Started Ceph mon.compute-0 for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5.
Oct 01 16:33:23 compute-0 podman[74295]: 2025-10-01 16:33:23.517486735 +0000 UTC m=+0.026700227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5647d1202e00
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: DB pointer 0x5647d128c000
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 16:33:23 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   54.78 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.08              0.00         1    0.079       0      0       0.0       0.0
                                            Sum      2/0   54.78 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.08              0.00         1    0.079       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.08              0.00         1    0.079       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.08              0.00         1    0.079       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5647d11d91f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 01 16:33:23 compute-0 ceph-mon[74273]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:33:23 compute-0 ceph-mon[74273]: mon.compute-0@-1(???) e1 preinit fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:33:23 compute-0 ceph-mon[74273]: mon.compute-0@-1(???).mds e1 new map
Oct 01 16:33:23 compute-0 ceph-mon[74273]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Oct 01 16:33:23 compute-0 ceph-mon[74273]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 01 16:33:23 compute-0 ceph-mon[74273]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 01 16:33:23 compute-0 ceph-mon[74273]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 01 16:33:23 compute-0 ceph-mon[74273]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 01 16:33:23 compute-0 ceph-mon[74273]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct 01 16:33:23 compute-0 ceph-mon[74273]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct 01 16:33:23 compute-0 ceph-mon[74273]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 01 16:33:23 compute-0 ceph-mon[74273]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct 01 16:33:23 compute-0 podman[74295]: 2025-10-01 16:33:23.690772326 +0000 UTC m=+0.199985758 container create a11865bb10ff703fa65fa617b12d95d3f873ab7443a1d4ad965d790bcf472043 (image=quay.io/ceph/ceph:v18, name=compassionate_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:33:23 compute-0 ceph-mon[74273]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 01 16:33:23 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 01 16:33:23 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 01 16:33:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 01 16:33:23 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : fsmap 
Oct 01 16:33:23 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 01 16:33:23 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 01 16:33:23 compute-0 systemd[1]: Started libpod-conmon-a11865bb10ff703fa65fa617b12d95d3f873ab7443a1d4ad965d790bcf472043.scope.
Oct 01 16:33:23 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0440555a75d046f2664fd85b6d22f576acdf31add05f921410f327d5bfb6376e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0440555a75d046f2664fd85b6d22f576acdf31add05f921410f327d5bfb6376e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0440555a75d046f2664fd85b6d22f576acdf31add05f921410f327d5bfb6376e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:23 compute-0 podman[74295]: 2025-10-01 16:33:23.959779324 +0000 UTC m=+0.468992806 container init a11865bb10ff703fa65fa617b12d95d3f873ab7443a1d4ad965d790bcf472043 (image=quay.io/ceph/ceph:v18, name=compassionate_joliot, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:33:23 compute-0 ceph-mon[74273]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 01 16:33:23 compute-0 ceph-mon[74273]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 01 16:33:23 compute-0 ceph-mon[74273]: fsmap 
Oct 01 16:33:23 compute-0 ceph-mon[74273]: osdmap e1: 0 total, 0 up, 0 in
Oct 01 16:33:23 compute-0 ceph-mon[74273]: mgrmap e1: no daemons active
Oct 01 16:33:23 compute-0 podman[74295]: 2025-10-01 16:33:23.969833025 +0000 UTC m=+0.479046427 container start a11865bb10ff703fa65fa617b12d95d3f873ab7443a1d4ad965d790bcf472043 (image=quay.io/ceph/ceph:v18, name=compassionate_joliot, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:33:24 compute-0 podman[74295]: 2025-10-01 16:33:24.015186814 +0000 UTC m=+0.524400256 container attach a11865bb10ff703fa65fa617b12d95d3f873ab7443a1d4ad965d790bcf472043 (image=quay.io/ceph/ceph:v18, name=compassionate_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 01 16:33:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Oct 01 16:33:24 compute-0 systemd[1]: libpod-a11865bb10ff703fa65fa617b12d95d3f873ab7443a1d4ad965d790bcf472043.scope: Deactivated successfully.
Oct 01 16:33:24 compute-0 podman[74295]: 2025-10-01 16:33:24.37662105 +0000 UTC m=+0.885834452 container died a11865bb10ff703fa65fa617b12d95d3f873ab7443a1d4ad965d790bcf472043 (image=quay.io/ceph/ceph:v18, name=compassionate_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 16:33:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0440555a75d046f2664fd85b6d22f576acdf31add05f921410f327d5bfb6376e-merged.mount: Deactivated successfully.
Oct 01 16:33:24 compute-0 podman[74295]: 2025-10-01 16:33:24.726402449 +0000 UTC m=+1.235615901 container remove a11865bb10ff703fa65fa617b12d95d3f873ab7443a1d4ad965d790bcf472043 (image=quay.io/ceph/ceph:v18, name=compassionate_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:33:24 compute-0 systemd[1]: libpod-conmon-a11865bb10ff703fa65fa617b12d95d3f873ab7443a1d4ad965d790bcf472043.scope: Deactivated successfully.
Oct 01 16:33:24 compute-0 podman[74366]: 2025-10-01 16:33:24.820195792 +0000 UTC m=+0.068703452 container create f4896e6451765277d9d910b6901c563e3a5d3cfd341436047304342c9f2140e7 (image=quay.io/ceph/ceph:v18, name=dreamy_mclaren, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 16:33:24 compute-0 podman[74366]: 2025-10-01 16:33:24.779093588 +0000 UTC m=+0.027601248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:24 compute-0 systemd[1]: Started libpod-conmon-f4896e6451765277d9d910b6901c563e3a5d3cfd341436047304342c9f2140e7.scope.
Oct 01 16:33:24 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5bb803a5b92a80b5e97f349b81086b652306308d794f6702669121f9786f4f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5bb803a5b92a80b5e97f349b81086b652306308d794f6702669121f9786f4f0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5bb803a5b92a80b5e97f349b81086b652306308d794f6702669121f9786f4f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:24 compute-0 podman[74366]: 2025-10-01 16:33:24.955297797 +0000 UTC m=+0.203805437 container init f4896e6451765277d9d910b6901c563e3a5d3cfd341436047304342c9f2140e7 (image=quay.io/ceph/ceph:v18, name=dreamy_mclaren, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:33:24 compute-0 podman[74366]: 2025-10-01 16:33:24.960729691 +0000 UTC m=+0.209237341 container start f4896e6451765277d9d910b6901c563e3a5d3cfd341436047304342c9f2140e7 (image=quay.io/ceph/ceph:v18, name=dreamy_mclaren, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:33:24 compute-0 podman[74366]: 2025-10-01 16:33:24.989398514 +0000 UTC m=+0.237906174 container attach f4896e6451765277d9d910b6901c563e3a5d3cfd341436047304342c9f2140e7 (image=quay.io/ceph/ceph:v18, name=dreamy_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 16:33:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Oct 01 16:33:25 compute-0 systemd[1]: libpod-f4896e6451765277d9d910b6901c563e3a5d3cfd341436047304342c9f2140e7.scope: Deactivated successfully.
Oct 01 16:33:25 compute-0 podman[74409]: 2025-10-01 16:33:25.422924116 +0000 UTC m=+0.020902820 container died f4896e6451765277d9d910b6901c563e3a5d3cfd341436047304342c9f2140e7 (image=quay.io/ceph/ceph:v18, name=dreamy_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 16:33:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5bb803a5b92a80b5e97f349b81086b652306308d794f6702669121f9786f4f0-merged.mount: Deactivated successfully.
Oct 01 16:33:26 compute-0 podman[74409]: 2025-10-01 16:33:26.300701094 +0000 UTC m=+0.898679798 container remove f4896e6451765277d9d910b6901c563e3a5d3cfd341436047304342c9f2140e7 (image=quay.io/ceph/ceph:v18, name=dreamy_mclaren, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 01 16:33:26 compute-0 systemd[1]: libpod-conmon-f4896e6451765277d9d910b6901c563e3a5d3cfd341436047304342c9f2140e7.scope: Deactivated successfully.
Oct 01 16:33:26 compute-0 systemd[1]: Reloading.
Oct 01 16:33:26 compute-0 systemd-rc-local-generator[74447]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:33:26 compute-0 systemd-sysv-generator[74451]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:33:26 compute-0 systemd[1]: Reloading.
Oct 01 16:33:26 compute-0 systemd-rc-local-generator[74494]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:33:26 compute-0 systemd-sysv-generator[74497]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:33:26 compute-0 systemd[1]: Starting Ceph mgr.compute-0.pmbdpj for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5...
Oct 01 16:33:27 compute-0 podman[74552]: 2025-10-01 16:33:27.084291514 +0000 UTC m=+0.037233055 container create 9642ab418b1376ae571b3e1091aa977c68e9f4f41966323d8cabeea5f635304b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/354ab9a1bc2084d0bbd07824a18b34c35a4d81de687e8e5a5f4c39929980d2a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/354ab9a1bc2084d0bbd07824a18b34c35a4d81de687e8e5a5f4c39929980d2a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/354ab9a1bc2084d0bbd07824a18b34c35a4d81de687e8e5a5f4c39929980d2a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/354ab9a1bc2084d0bbd07824a18b34c35a4d81de687e8e5a5f4c39929980d2a2/merged/var/lib/ceph/mgr/ceph-compute-0.pmbdpj supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:27 compute-0 podman[74552]: 2025-10-01 16:33:27.140486591 +0000 UTC m=+0.093428142 container init 9642ab418b1376ae571b3e1091aa977c68e9f4f41966323d8cabeea5f635304b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:33:27 compute-0 podman[74552]: 2025-10-01 16:33:27.146535806 +0000 UTC m=+0.099477337 container start 9642ab418b1376ae571b3e1091aa977c68e9f4f41966323d8cabeea5f635304b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 01 16:33:27 compute-0 bash[74552]: 9642ab418b1376ae571b3e1091aa977c68e9f4f41966323d8cabeea5f635304b
Oct 01 16:33:27 compute-0 podman[74552]: 2025-10-01 16:33:27.065883149 +0000 UTC m=+0.018824710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:27 compute-0 systemd[1]: Started Ceph mgr.compute-0.pmbdpj for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5.
Oct 01 16:33:27 compute-0 ceph-mgr[74571]: set uid:gid to 167:167 (ceph:ceph)
Oct 01 16:33:27 compute-0 ceph-mgr[74571]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 01 16:33:27 compute-0 ceph-mgr[74571]: pidfile_write: ignore empty --pid-file
Oct 01 16:33:27 compute-0 podman[74572]: 2025-10-01 16:33:27.225792355 +0000 UTC m=+0.045428672 container create e36ea35a8aeba097733ae621133b7c1c854b4224999b99acd2eb27428b400468 (image=quay.io/ceph/ceph:v18, name=admiring_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 16:33:27 compute-0 systemd[1]: Started libpod-conmon-e36ea35a8aeba097733ae621133b7c1c854b4224999b99acd2eb27428b400468.scope.
Oct 01 16:33:27 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'alerts'
Oct 01 16:33:27 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83ebb32acefeb3d351f00f2b1b02595df10792c1379b626f7de65939d2dbab2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83ebb32acefeb3d351f00f2b1b02595df10792c1379b626f7de65939d2dbab2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83ebb32acefeb3d351f00f2b1b02595df10792c1379b626f7de65939d2dbab2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:27 compute-0 podman[74572]: 2025-10-01 16:33:27.302152597 +0000 UTC m=+0.121788924 container init e36ea35a8aeba097733ae621133b7c1c854b4224999b99acd2eb27428b400468 (image=quay.io/ceph/ceph:v18, name=admiring_brown, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 16:33:27 compute-0 podman[74572]: 2025-10-01 16:33:27.207112002 +0000 UTC m=+0.026748339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:27 compute-0 podman[74572]: 2025-10-01 16:33:27.308186101 +0000 UTC m=+0.127822418 container start e36ea35a8aeba097733ae621133b7c1c854b4224999b99acd2eb27428b400468 (image=quay.io/ceph/ceph:v18, name=admiring_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:33:27 compute-0 podman[74572]: 2025-10-01 16:33:27.311921158 +0000 UTC m=+0.131557505 container attach e36ea35a8aeba097733ae621133b7c1c854b4224999b99acd2eb27428b400468 (image=quay.io/ceph/ceph:v18, name=admiring_brown, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:33:27 compute-0 ceph-mgr[74571]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 01 16:33:27 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'balancer'
Oct 01 16:33:27 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:27.602+0000 7f5d20517140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 01 16:33:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 01 16:33:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2194490521' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 16:33:27 compute-0 admiring_brown[74614]: 
Oct 01 16:33:27 compute-0 admiring_brown[74614]: {
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     "fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     "health": {
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "status": "HEALTH_OK",
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "checks": {},
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "mutes": []
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     },
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     "election_epoch": 5,
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     "quorum": [
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         0
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     ],
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     "quorum_names": [
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "compute-0"
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     ],
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     "quorum_age": 3,
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     "monmap": {
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "epoch": 1,
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "min_mon_release_name": "reef",
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "num_mons": 1
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     },
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     "osdmap": {
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "epoch": 1,
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "num_osds": 0,
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "num_up_osds": 0,
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "osd_up_since": 0,
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "num_in_osds": 0,
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "osd_in_since": 0,
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "num_remapped_pgs": 0
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     },
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     "pgmap": {
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "pgs_by_state": [],
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "num_pgs": 0,
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "num_pools": 0,
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "num_objects": 0,
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "data_bytes": 0,
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "bytes_used": 0,
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "bytes_avail": 0,
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "bytes_total": 0
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     },
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     "fsmap": {
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "epoch": 1,
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "by_rank": [],
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "up:standby": 0
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     },
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     "mgrmap": {
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "available": false,
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "num_standbys": 0,
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "modules": [
Oct 01 16:33:27 compute-0 admiring_brown[74614]:             "iostat",
Oct 01 16:33:27 compute-0 admiring_brown[74614]:             "nfs",
Oct 01 16:33:27 compute-0 admiring_brown[74614]:             "restful"
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         ],
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "services": {}
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     },
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     "servicemap": {
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "epoch": 1,
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "modified": "2025-10-01T16:33:19.992352+0000",
Oct 01 16:33:27 compute-0 admiring_brown[74614]:         "services": {}
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     },
Oct 01 16:33:27 compute-0 admiring_brown[74614]:     "progress_events": {}
Oct 01 16:33:27 compute-0 admiring_brown[74614]: }
Oct 01 16:33:27 compute-0 systemd[1]: libpod-e36ea35a8aeba097733ae621133b7c1c854b4224999b99acd2eb27428b400468.scope: Deactivated successfully.
Oct 01 16:33:27 compute-0 podman[74572]: 2025-10-01 16:33:27.69682938 +0000 UTC m=+0.516465707 container died e36ea35a8aeba097733ae621133b7c1c854b4224999b99acd2eb27428b400468 (image=quay.io/ceph/ceph:v18, name=admiring_brown, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:33:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e83ebb32acefeb3d351f00f2b1b02595df10792c1379b626f7de65939d2dbab2-merged.mount: Deactivated successfully.
Oct 01 16:33:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2194490521' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 16:33:27 compute-0 podman[74572]: 2025-10-01 16:33:27.74603916 +0000 UTC m=+0.565675467 container remove e36ea35a8aeba097733ae621133b7c1c854b4224999b99acd2eb27428b400468 (image=quay.io/ceph/ceph:v18, name=admiring_brown, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 16:33:27 compute-0 systemd[1]: libpod-conmon-e36ea35a8aeba097733ae621133b7c1c854b4224999b99acd2eb27428b400468.scope: Deactivated successfully.
Oct 01 16:33:27 compute-0 ceph-mgr[74571]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 01 16:33:27 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'cephadm'
Oct 01 16:33:27 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:27.864+0000 7f5d20517140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 01 16:33:29 compute-0 podman[74666]: 2025-10-01 16:33:29.834839504 +0000 UTC m=+0.067130689 container create e0d48324b5176e9b032b26b65c15df302e3fea299cf21712eaca856c5e5ec86a (image=quay.io/ceph/ceph:v18, name=happy_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 16:33:29 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'crash'
Oct 01 16:33:29 compute-0 systemd[1]: Started libpod-conmon-e0d48324b5176e9b032b26b65c15df302e3fea299cf21712eaca856c5e5ec86a.scope.
Oct 01 16:33:29 compute-0 podman[74666]: 2025-10-01 16:33:29.796385649 +0000 UTC m=+0.028676844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:29 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe51077c87cb90f567d025dd929bfe7d78171b2c04bcaeae38e9032c8ab38112/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe51077c87cb90f567d025dd929bfe7d78171b2c04bcaeae38e9032c8ab38112/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe51077c87cb90f567d025dd929bfe7d78171b2c04bcaeae38e9032c8ab38112/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:29 compute-0 podman[74666]: 2025-10-01 16:33:29.937331861 +0000 UTC m=+0.169623056 container init e0d48324b5176e9b032b26b65c15df302e3fea299cf21712eaca856c5e5ec86a (image=quay.io/ceph/ceph:v18, name=happy_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:33:29 compute-0 podman[74666]: 2025-10-01 16:33:29.944858396 +0000 UTC m=+0.177149561 container start e0d48324b5176e9b032b26b65c15df302e3fea299cf21712eaca856c5e5ec86a (image=quay.io/ceph/ceph:v18, name=happy_leakey, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:33:29 compute-0 podman[74666]: 2025-10-01 16:33:29.950213128 +0000 UTC m=+0.182504333 container attach e0d48324b5176e9b032b26b65c15df302e3fea299cf21712eaca856c5e5ec86a (image=quay.io/ceph/ceph:v18, name=happy_leakey, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:33:30 compute-0 ceph-mgr[74571]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 01 16:33:30 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'dashboard'
Oct 01 16:33:30 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:30.193+0000 7f5d20517140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 01 16:33:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 01 16:33:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/269399271' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 16:33:30 compute-0 happy_leakey[74682]: 
Oct 01 16:33:30 compute-0 happy_leakey[74682]: {
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     "fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     "health": {
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "status": "HEALTH_OK",
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "checks": {},
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "mutes": []
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     },
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     "election_epoch": 5,
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     "quorum": [
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         0
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     ],
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     "quorum_names": [
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "compute-0"
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     ],
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     "quorum_age": 6,
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     "monmap": {
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "epoch": 1,
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "min_mon_release_name": "reef",
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "num_mons": 1
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     },
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     "osdmap": {
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "epoch": 1,
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "num_osds": 0,
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "num_up_osds": 0,
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "osd_up_since": 0,
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "num_in_osds": 0,
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "osd_in_since": 0,
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "num_remapped_pgs": 0
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     },
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     "pgmap": {
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "pgs_by_state": [],
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "num_pgs": 0,
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "num_pools": 0,
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "num_objects": 0,
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "data_bytes": 0,
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "bytes_used": 0,
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "bytes_avail": 0,
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "bytes_total": 0
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     },
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     "fsmap": {
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "epoch": 1,
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "by_rank": [],
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "up:standby": 0
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     },
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     "mgrmap": {
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "available": false,
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "num_standbys": 0,
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "modules": [
Oct 01 16:33:30 compute-0 happy_leakey[74682]:             "iostat",
Oct 01 16:33:30 compute-0 happy_leakey[74682]:             "nfs",
Oct 01 16:33:30 compute-0 happy_leakey[74682]:             "restful"
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         ],
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "services": {}
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     },
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     "servicemap": {
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "epoch": 1,
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "modified": "2025-10-01T16:33:19.992352+0000",
Oct 01 16:33:30 compute-0 happy_leakey[74682]:         "services": {}
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     },
Oct 01 16:33:30 compute-0 happy_leakey[74682]:     "progress_events": {}
Oct 01 16:33:30 compute-0 happy_leakey[74682]: }
Oct 01 16:33:30 compute-0 systemd[1]: libpod-e0d48324b5176e9b032b26b65c15df302e3fea299cf21712eaca856c5e5ec86a.scope: Deactivated successfully.
Oct 01 16:33:30 compute-0 podman[74666]: 2025-10-01 16:33:30.389148563 +0000 UTC m=+0.621439738 container died e0d48324b5176e9b032b26b65c15df302e3fea299cf21712eaca856c5e5ec86a (image=quay.io/ceph/ceph:v18, name=happy_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:33:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe51077c87cb90f567d025dd929bfe7d78171b2c04bcaeae38e9032c8ab38112-merged.mount: Deactivated successfully.
Oct 01 16:33:30 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/269399271' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 16:33:30 compute-0 podman[74666]: 2025-10-01 16:33:30.431289933 +0000 UTC m=+0.663581108 container remove e0d48324b5176e9b032b26b65c15df302e3fea299cf21712eaca856c5e5ec86a (image=quay.io/ceph/ceph:v18, name=happy_leakey, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 16:33:30 compute-0 systemd[1]: libpod-conmon-e0d48324b5176e9b032b26b65c15df302e3fea299cf21712eaca856c5e5ec86a.scope: Deactivated successfully.
Oct 01 16:33:31 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'devicehealth'
Oct 01 16:33:31 compute-0 ceph-mgr[74571]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 01 16:33:31 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'diskprediction_local'
Oct 01 16:33:31 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:31.954+0000 7f5d20517140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 01 16:33:32 compute-0 podman[74718]: 2025-10-01 16:33:32.492116727 +0000 UTC m=+0.039197741 container create ed522d0b4eac1bb95171fd722181ef12f0e696a7a0ec91daa9190655ae89f24b (image=quay.io/ceph/ceph:v18, name=loving_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 16:33:32 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 01 16:33:32 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 01 16:33:32 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]:   from numpy import show_config as show_numpy_config
Oct 01 16:33:32 compute-0 ceph-mgr[74571]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 01 16:33:32 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'influx'
Oct 01 16:33:32 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:32.513+0000 7f5d20517140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 01 16:33:32 compute-0 systemd[1]: Started libpod-conmon-ed522d0b4eac1bb95171fd722181ef12f0e696a7a0ec91daa9190655ae89f24b.scope.
Oct 01 16:33:32 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12129fff275f6657f80536d14ea5320042883cca325f9a6054832b72613f505/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12129fff275f6657f80536d14ea5320042883cca325f9a6054832b72613f505/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12129fff275f6657f80536d14ea5320042883cca325f9a6054832b72613f505/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:32 compute-0 podman[74718]: 2025-10-01 16:33:32.556573005 +0000 UTC m=+0.103654039 container init ed522d0b4eac1bb95171fd722181ef12f0e696a7a0ec91daa9190655ae89f24b (image=quay.io/ceph/ceph:v18, name=loving_swartz, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 16:33:32 compute-0 podman[74718]: 2025-10-01 16:33:32.56378967 +0000 UTC m=+0.110870704 container start ed522d0b4eac1bb95171fd722181ef12f0e696a7a0ec91daa9190655ae89f24b (image=quay.io/ceph/ceph:v18, name=loving_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:33:32 compute-0 podman[74718]: 2025-10-01 16:33:32.471357313 +0000 UTC m=+0.018438357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:32 compute-0 podman[74718]: 2025-10-01 16:33:32.567429693 +0000 UTC m=+0.114510727 container attach ed522d0b4eac1bb95171fd722181ef12f0e696a7a0ec91daa9190655ae89f24b (image=quay.io/ceph/ceph:v18, name=loving_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:33:32 compute-0 ceph-mgr[74571]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 01 16:33:32 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'insights'
Oct 01 16:33:32 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:32.758+0000 7f5d20517140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 01 16:33:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 01 16:33:32 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/397705543' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 16:33:32 compute-0 loving_swartz[74734]: 
Oct 01 16:33:32 compute-0 loving_swartz[74734]: {
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     "fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     "health": {
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "status": "HEALTH_OK",
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "checks": {},
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "mutes": []
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     },
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     "election_epoch": 5,
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     "quorum": [
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         0
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     ],
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     "quorum_names": [
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "compute-0"
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     ],
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     "quorum_age": 9,
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     "monmap": {
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "epoch": 1,
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "min_mon_release_name": "reef",
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "num_mons": 1
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     },
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     "osdmap": {
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "epoch": 1,
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "num_osds": 0,
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "num_up_osds": 0,
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "osd_up_since": 0,
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "num_in_osds": 0,
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "osd_in_since": 0,
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "num_remapped_pgs": 0
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     },
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     "pgmap": {
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "pgs_by_state": [],
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "num_pgs": 0,
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "num_pools": 0,
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "num_objects": 0,
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "data_bytes": 0,
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "bytes_used": 0,
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "bytes_avail": 0,
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "bytes_total": 0
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     },
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     "fsmap": {
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "epoch": 1,
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "by_rank": [],
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "up:standby": 0
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     },
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     "mgrmap": {
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "available": false,
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "num_standbys": 0,
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "modules": [
Oct 01 16:33:32 compute-0 loving_swartz[74734]:             "iostat",
Oct 01 16:33:32 compute-0 loving_swartz[74734]:             "nfs",
Oct 01 16:33:32 compute-0 loving_swartz[74734]:             "restful"
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         ],
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "services": {}
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     },
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     "servicemap": {
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "epoch": 1,
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "modified": "2025-10-01T16:33:19.992352+0000",
Oct 01 16:33:32 compute-0 loving_swartz[74734]:         "services": {}
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     },
Oct 01 16:33:32 compute-0 loving_swartz[74734]:     "progress_events": {}
Oct 01 16:33:32 compute-0 loving_swartz[74734]: }
Oct 01 16:33:32 compute-0 systemd[1]: libpod-ed522d0b4eac1bb95171fd722181ef12f0e696a7a0ec91daa9190655ae89f24b.scope: Deactivated successfully.
Oct 01 16:33:32 compute-0 podman[74718]: 2025-10-01 16:33:32.944000872 +0000 UTC m=+0.491081926 container died ed522d0b4eac1bb95171fd722181ef12f0e696a7a0ec91daa9190655ae89f24b (image=quay.io/ceph/ceph:v18, name=loving_swartz, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 16:33:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f12129fff275f6657f80536d14ea5320042883cca325f9a6054832b72613f505-merged.mount: Deactivated successfully.
Oct 01 16:33:32 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/397705543' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 16:33:32 compute-0 podman[74718]: 2025-10-01 16:33:32.987070334 +0000 UTC m=+0.534151348 container remove ed522d0b4eac1bb95171fd722181ef12f0e696a7a0ec91daa9190655ae89f24b (image=quay.io/ceph/ceph:v18, name=loving_swartz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:33:32 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'iostat'
Oct 01 16:33:32 compute-0 systemd[1]: libpod-conmon-ed522d0b4eac1bb95171fd722181ef12f0e696a7a0ec91daa9190655ae89f24b.scope: Deactivated successfully.
Oct 01 16:33:33 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:33.218+0000 7f5d20517140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 01 16:33:33 compute-0 ceph-mgr[74571]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 01 16:33:33 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'k8sevents'
Oct 01 16:33:34 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'localpool'
Oct 01 16:33:35 compute-0 podman[74771]: 2025-10-01 16:33:35.092315324 +0000 UTC m=+0.077382357 container create 8043a56a7795b70b20a854727d445b4fabebc3afe1f7657b4f99c98e7ee756f7 (image=quay.io/ceph/ceph:v18, name=clever_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 01 16:33:35 compute-0 podman[74771]: 2025-10-01 16:33:35.05417188 +0000 UTC m=+0.039238933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:35 compute-0 systemd[1]: Started libpod-conmon-8043a56a7795b70b20a854727d445b4fabebc3afe1f7657b4f99c98e7ee756f7.scope.
Oct 01 16:33:35 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'mds_autoscaler'
Oct 01 16:33:35 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/005f06cdf9e0eb2e0cb1740d858725e110bd971a21e354d8edd9d4d60b8c5910/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/005f06cdf9e0eb2e0cb1740d858725e110bd971a21e354d8edd9d4d60b8c5910/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/005f06cdf9e0eb2e0cb1740d858725e110bd971a21e354d8edd9d4d60b8c5910/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:35 compute-0 podman[74771]: 2025-10-01 16:33:35.30228939 +0000 UTC m=+0.287356493 container init 8043a56a7795b70b20a854727d445b4fabebc3afe1f7657b4f99c98e7ee756f7 (image=quay.io/ceph/ceph:v18, name=clever_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:33:35 compute-0 podman[74771]: 2025-10-01 16:33:35.311507373 +0000 UTC m=+0.296574406 container start 8043a56a7795b70b20a854727d445b4fabebc3afe1f7657b4f99c98e7ee756f7 (image=quay.io/ceph/ceph:v18, name=clever_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 01 16:33:35 compute-0 podman[74771]: 2025-10-01 16:33:35.406155735 +0000 UTC m=+0.391222868 container attach 8043a56a7795b70b20a854727d445b4fabebc3afe1f7657b4f99c98e7ee756f7 (image=quay.io/ceph/ceph:v18, name=clever_goldberg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:33:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 01 16:33:35 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/363432691' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 16:33:35 compute-0 clever_goldberg[74787]: 
Oct 01 16:33:35 compute-0 clever_goldberg[74787]: {
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     "fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     "health": {
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "status": "HEALTH_OK",
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "checks": {},
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "mutes": []
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     },
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     "election_epoch": 5,
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     "quorum": [
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         0
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     ],
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     "quorum_names": [
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "compute-0"
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     ],
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     "quorum_age": 11,
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     "monmap": {
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "epoch": 1,
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "min_mon_release_name": "reef",
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "num_mons": 1
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     },
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     "osdmap": {
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "epoch": 1,
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "num_osds": 0,
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "num_up_osds": 0,
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "osd_up_since": 0,
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "num_in_osds": 0,
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "osd_in_since": 0,
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "num_remapped_pgs": 0
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     },
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     "pgmap": {
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "pgs_by_state": [],
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "num_pgs": 0,
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "num_pools": 0,
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "num_objects": 0,
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "data_bytes": 0,
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "bytes_used": 0,
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "bytes_avail": 0,
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "bytes_total": 0
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     },
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     "fsmap": {
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "epoch": 1,
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "by_rank": [],
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "up:standby": 0
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     },
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     "mgrmap": {
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "available": false,
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "num_standbys": 0,
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "modules": [
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:             "iostat",
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:             "nfs",
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:             "restful"
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         ],
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "services": {}
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     },
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     "servicemap": {
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "epoch": 1,
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "modified": "2025-10-01T16:33:19.992352+0000",
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:         "services": {}
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     },
Oct 01 16:33:35 compute-0 clever_goldberg[74787]:     "progress_events": {}
Oct 01 16:33:35 compute-0 clever_goldberg[74787]: }
Oct 01 16:33:35 compute-0 systemd[1]: libpod-8043a56a7795b70b20a854727d445b4fabebc3afe1f7657b4f99c98e7ee756f7.scope: Deactivated successfully.
Oct 01 16:33:35 compute-0 podman[74771]: 2025-10-01 16:33:35.733747951 +0000 UTC m=+0.718814984 container died 8043a56a7795b70b20a854727d445b4fabebc3afe1f7657b4f99c98e7ee756f7 (image=quay.io/ceph/ceph:v18, name=clever_goldberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:33:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-005f06cdf9e0eb2e0cb1740d858725e110bd971a21e354d8edd9d4d60b8c5910-merged.mount: Deactivated successfully.
Oct 01 16:33:35 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/363432691' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 16:33:35 compute-0 podman[74771]: 2025-10-01 16:33:35.791543763 +0000 UTC m=+0.776610796 container remove 8043a56a7795b70b20a854727d445b4fabebc3afe1f7657b4f99c98e7ee756f7 (image=quay.io/ceph/ceph:v18, name=clever_goldberg, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:33:35 compute-0 systemd[1]: libpod-conmon-8043a56a7795b70b20a854727d445b4fabebc3afe1f7657b4f99c98e7ee756f7.scope: Deactivated successfully.
Oct 01 16:33:35 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'mirroring'
Oct 01 16:33:36 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'nfs'
Oct 01 16:33:36 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:36.920+0000 7f5d20517140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 01 16:33:36 compute-0 ceph-mgr[74571]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 01 16:33:36 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'orchestrator'
Oct 01 16:33:37 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:37.561+0000 7f5d20517140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 01 16:33:37 compute-0 ceph-mgr[74571]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 01 16:33:37 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'osd_perf_query'
Oct 01 16:33:37 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:37.821+0000 7f5d20517140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 01 16:33:37 compute-0 ceph-mgr[74571]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 01 16:33:37 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'osd_support'
Oct 01 16:33:37 compute-0 podman[74825]: 2025-10-01 16:33:37.852687247 +0000 UTC m=+0.040085261 container create a7029927dfab8c554dbcf16ddd3e5c9f4459a44f53ca2d3e74e1d2573c4f0744 (image=quay.io/ceph/ceph:v18, name=tender_elion, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:33:37 compute-0 systemd[1]: Started libpod-conmon-a7029927dfab8c554dbcf16ddd3e5c9f4459a44f53ca2d3e74e1d2573c4f0744.scope.
Oct 01 16:33:37 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d3b20ebab6d67412c59faf754103eeb42267ef0e6417588e9aaeffa741009d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d3b20ebab6d67412c59faf754103eeb42267ef0e6417588e9aaeffa741009d5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d3b20ebab6d67412c59faf754103eeb42267ef0e6417588e9aaeffa741009d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:37 compute-0 podman[74825]: 2025-10-01 16:33:37.834390606 +0000 UTC m=+0.021788650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:37 compute-0 podman[74825]: 2025-10-01 16:33:37.937308268 +0000 UTC m=+0.124706392 container init a7029927dfab8c554dbcf16ddd3e5c9f4459a44f53ca2d3e74e1d2573c4f0744 (image=quay.io/ceph/ceph:v18, name=tender_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 16:33:37 compute-0 podman[74825]: 2025-10-01 16:33:37.943640493 +0000 UTC m=+0.131038517 container start a7029927dfab8c554dbcf16ddd3e5c9f4459a44f53ca2d3e74e1d2573c4f0744 (image=quay.io/ceph/ceph:v18, name=tender_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 16:33:37 compute-0 podman[74825]: 2025-10-01 16:33:37.946809191 +0000 UTC m=+0.134207305 container attach a7029927dfab8c554dbcf16ddd3e5c9f4459a44f53ca2d3e74e1d2573c4f0744 (image=quay.io/ceph/ceph:v18, name=tender_elion, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 16:33:38 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:38.106+0000 7f5d20517140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 01 16:33:38 compute-0 ceph-mgr[74571]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 01 16:33:38 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'pg_autoscaler'
Oct 01 16:33:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 01 16:33:38 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1118037164' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 16:33:38 compute-0 tender_elion[74841]: 
Oct 01 16:33:38 compute-0 tender_elion[74841]: {
Oct 01 16:33:38 compute-0 tender_elion[74841]:     "fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:33:38 compute-0 tender_elion[74841]:     "health": {
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "status": "HEALTH_OK",
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "checks": {},
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "mutes": []
Oct 01 16:33:38 compute-0 tender_elion[74841]:     },
Oct 01 16:33:38 compute-0 tender_elion[74841]:     "election_epoch": 5,
Oct 01 16:33:38 compute-0 tender_elion[74841]:     "quorum": [
Oct 01 16:33:38 compute-0 tender_elion[74841]:         0
Oct 01 16:33:38 compute-0 tender_elion[74841]:     ],
Oct 01 16:33:38 compute-0 tender_elion[74841]:     "quorum_names": [
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "compute-0"
Oct 01 16:33:38 compute-0 tender_elion[74841]:     ],
Oct 01 16:33:38 compute-0 tender_elion[74841]:     "quorum_age": 14,
Oct 01 16:33:38 compute-0 tender_elion[74841]:     "monmap": {
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "epoch": 1,
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "min_mon_release_name": "reef",
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "num_mons": 1
Oct 01 16:33:38 compute-0 tender_elion[74841]:     },
Oct 01 16:33:38 compute-0 tender_elion[74841]:     "osdmap": {
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "epoch": 1,
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "num_osds": 0,
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "num_up_osds": 0,
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "osd_up_since": 0,
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "num_in_osds": 0,
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "osd_in_since": 0,
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "num_remapped_pgs": 0
Oct 01 16:33:38 compute-0 tender_elion[74841]:     },
Oct 01 16:33:38 compute-0 tender_elion[74841]:     "pgmap": {
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "pgs_by_state": [],
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "num_pgs": 0,
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "num_pools": 0,
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "num_objects": 0,
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "data_bytes": 0,
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "bytes_used": 0,
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "bytes_avail": 0,
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "bytes_total": 0
Oct 01 16:33:38 compute-0 tender_elion[74841]:     },
Oct 01 16:33:38 compute-0 tender_elion[74841]:     "fsmap": {
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "epoch": 1,
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "by_rank": [],
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "up:standby": 0
Oct 01 16:33:38 compute-0 tender_elion[74841]:     },
Oct 01 16:33:38 compute-0 tender_elion[74841]:     "mgrmap": {
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "available": false,
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "num_standbys": 0,
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "modules": [
Oct 01 16:33:38 compute-0 tender_elion[74841]:             "iostat",
Oct 01 16:33:38 compute-0 tender_elion[74841]:             "nfs",
Oct 01 16:33:38 compute-0 tender_elion[74841]:             "restful"
Oct 01 16:33:38 compute-0 tender_elion[74841]:         ],
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "services": {}
Oct 01 16:33:38 compute-0 tender_elion[74841]:     },
Oct 01 16:33:38 compute-0 tender_elion[74841]:     "servicemap": {
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "epoch": 1,
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "modified": "2025-10-01T16:33:19.992352+0000",
Oct 01 16:33:38 compute-0 tender_elion[74841]:         "services": {}
Oct 01 16:33:38 compute-0 tender_elion[74841]:     },
Oct 01 16:33:38 compute-0 tender_elion[74841]:     "progress_events": {}
Oct 01 16:33:38 compute-0 tender_elion[74841]: }
Oct 01 16:33:38 compute-0 systemd[1]: libpod-a7029927dfab8c554dbcf16ddd3e5c9f4459a44f53ca2d3e74e1d2573c4f0744.scope: Deactivated successfully.
Oct 01 16:33:38 compute-0 podman[74825]: 2025-10-01 16:33:38.351522045 +0000 UTC m=+0.538920069 container died a7029927dfab8c554dbcf16ddd3e5c9f4459a44f53ca2d3e74e1d2573c4f0744 (image=quay.io/ceph/ceph:v18, name=tender_elion, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 16:33:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d3b20ebab6d67412c59faf754103eeb42267ef0e6417588e9aaeffa741009d5-merged.mount: Deactivated successfully.
Oct 01 16:33:38 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1118037164' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 16:33:38 compute-0 podman[74825]: 2025-10-01 16:33:38.40826631 +0000 UTC m=+0.595664334 container remove a7029927dfab8c554dbcf16ddd3e5c9f4459a44f53ca2d3e74e1d2573c4f0744 (image=quay.io/ceph/ceph:v18, name=tender_elion, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:33:38 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:38.407+0000 7f5d20517140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 01 16:33:38 compute-0 ceph-mgr[74571]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 01 16:33:38 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'progress'
Oct 01 16:33:38 compute-0 systemd[1]: libpod-conmon-a7029927dfab8c554dbcf16ddd3e5c9f4459a44f53ca2d3e74e1d2573c4f0744.scope: Deactivated successfully.
Oct 01 16:33:38 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:38.645+0000 7f5d20517140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 01 16:33:38 compute-0 ceph-mgr[74571]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 01 16:33:38 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'prometheus'
Oct 01 16:33:39 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:39.696+0000 7f5d20517140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 01 16:33:39 compute-0 ceph-mgr[74571]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 01 16:33:39 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'rbd_support'
Oct 01 16:33:40 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:40.010+0000 7f5d20517140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 01 16:33:40 compute-0 ceph-mgr[74571]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 01 16:33:40 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'restful'
Oct 01 16:33:40 compute-0 podman[74879]: 2025-10-01 16:33:40.468821775 +0000 UTC m=+0.039354376 container create b7e137551d29c8f119825fcc370388a32e947bfd290a7ee5e820d514b7ff3552 (image=quay.io/ceph/ceph:v18, name=happy_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 01 16:33:40 compute-0 systemd[1]: Started libpod-conmon-b7e137551d29c8f119825fcc370388a32e947bfd290a7ee5e820d514b7ff3552.scope.
Oct 01 16:33:40 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a5795b124d6672655f7cbaeae5e8a97cac9b87274912138a988b46ec10696e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a5795b124d6672655f7cbaeae5e8a97cac9b87274912138a988b46ec10696e7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a5795b124d6672655f7cbaeae5e8a97cac9b87274912138a988b46ec10696e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:40 compute-0 podman[74879]: 2025-10-01 16:33:40.537437464 +0000 UTC m=+0.107970085 container init b7e137551d29c8f119825fcc370388a32e947bfd290a7ee5e820d514b7ff3552 (image=quay.io/ceph/ceph:v18, name=happy_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:33:40 compute-0 podman[74879]: 2025-10-01 16:33:40.544173513 +0000 UTC m=+0.114706104 container start b7e137551d29c8f119825fcc370388a32e947bfd290a7ee5e820d514b7ff3552 (image=quay.io/ceph/ceph:v18, name=happy_yonath, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 16:33:40 compute-0 podman[74879]: 2025-10-01 16:33:40.447841414 +0000 UTC m=+0.018374035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:40 compute-0 podman[74879]: 2025-10-01 16:33:40.548570652 +0000 UTC m=+0.119103273 container attach b7e137551d29c8f119825fcc370388a32e947bfd290a7ee5e820d514b7ff3552 (image=quay.io/ceph/ceph:v18, name=happy_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 16:33:40 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'rgw'
Oct 01 16:33:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 01 16:33:40 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3648322070' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 16:33:40 compute-0 happy_yonath[74895]: 
Oct 01 16:33:40 compute-0 happy_yonath[74895]: {
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     "fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     "health": {
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "status": "HEALTH_OK",
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "checks": {},
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "mutes": []
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     },
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     "election_epoch": 5,
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     "quorum": [
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         0
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     ],
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     "quorum_names": [
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "compute-0"
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     ],
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     "quorum_age": 17,
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     "monmap": {
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "epoch": 1,
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "min_mon_release_name": "reef",
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "num_mons": 1
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     },
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     "osdmap": {
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "epoch": 1,
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "num_osds": 0,
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "num_up_osds": 0,
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "osd_up_since": 0,
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "num_in_osds": 0,
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "osd_in_since": 0,
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "num_remapped_pgs": 0
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     },
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     "pgmap": {
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "pgs_by_state": [],
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "num_pgs": 0,
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "num_pools": 0,
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "num_objects": 0,
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "data_bytes": 0,
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "bytes_used": 0,
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "bytes_avail": 0,
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "bytes_total": 0
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     },
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     "fsmap": {
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "epoch": 1,
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "by_rank": [],
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "up:standby": 0
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     },
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     "mgrmap": {
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "available": false,
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "num_standbys": 0,
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "modules": [
Oct 01 16:33:40 compute-0 happy_yonath[74895]:             "iostat",
Oct 01 16:33:40 compute-0 happy_yonath[74895]:             "nfs",
Oct 01 16:33:40 compute-0 happy_yonath[74895]:             "restful"
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         ],
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "services": {}
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     },
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     "servicemap": {
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "epoch": 1,
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "modified": "2025-10-01T16:33:19.992352+0000",
Oct 01 16:33:40 compute-0 happy_yonath[74895]:         "services": {}
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     },
Oct 01 16:33:40 compute-0 happy_yonath[74895]:     "progress_events": {}
Oct 01 16:33:40 compute-0 happy_yonath[74895]: }
Oct 01 16:33:40 compute-0 systemd[1]: libpod-b7e137551d29c8f119825fcc370388a32e947bfd290a7ee5e820d514b7ff3552.scope: Deactivated successfully.
Oct 01 16:33:40 compute-0 podman[74879]: 2025-10-01 16:33:40.964767484 +0000 UTC m=+0.535300085 container died b7e137551d29c8f119825fcc370388a32e947bfd290a7ee5e820d514b7ff3552 (image=quay.io/ceph/ceph:v18, name=happy_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 16:33:41 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3648322070' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 16:33:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a5795b124d6672655f7cbaeae5e8a97cac9b87274912138a988b46ec10696e7-merged.mount: Deactivated successfully.
Oct 01 16:33:41 compute-0 podman[74879]: 2025-10-01 16:33:41.226113893 +0000 UTC m=+0.796646494 container remove b7e137551d29c8f119825fcc370388a32e947bfd290a7ee5e820d514b7ff3552 (image=quay.io/ceph/ceph:v18, name=happy_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:33:41 compute-0 systemd[1]: libpod-conmon-b7e137551d29c8f119825fcc370388a32e947bfd290a7ee5e820d514b7ff3552.scope: Deactivated successfully.
Oct 01 16:33:41 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:41.585+0000 7f5d20517140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 01 16:33:41 compute-0 ceph-mgr[74571]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 01 16:33:41 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'rook'
Oct 01 16:33:43 compute-0 podman[74936]: 2025-10-01 16:33:43.324845344 +0000 UTC m=+0.065076430 container create 4f985928d9b7bb120aa4a39f5da100303fe3acb0f7db65102aa35821139dc03f (image=quay.io/ceph/ceph:v18, name=sad_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 16:33:43 compute-0 systemd[1]: Started libpod-conmon-4f985928d9b7bb120aa4a39f5da100303fe3acb0f7db65102aa35821139dc03f.scope.
Oct 01 16:33:43 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a52380497e8878012ab964ce100d8f17ad05b1b3701e87b1651753143b0072fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a52380497e8878012ab964ce100d8f17ad05b1b3701e87b1651753143b0072fe/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a52380497e8878012ab964ce100d8f17ad05b1b3701e87b1651753143b0072fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:43 compute-0 podman[74936]: 2025-10-01 16:33:43.29025403 +0000 UTC m=+0.030485136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:43 compute-0 podman[74936]: 2025-10-01 16:33:43.388443752 +0000 UTC m=+0.128674838 container init 4f985928d9b7bb120aa4a39f5da100303fe3acb0f7db65102aa35821139dc03f (image=quay.io/ceph/ceph:v18, name=sad_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:33:43 compute-0 podman[74936]: 2025-10-01 16:33:43.393360529 +0000 UTC m=+0.133591615 container start 4f985928d9b7bb120aa4a39f5da100303fe3acb0f7db65102aa35821139dc03f (image=quay.io/ceph/ceph:v18, name=sad_chaum, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:33:43 compute-0 podman[74936]: 2025-10-01 16:33:43.397787529 +0000 UTC m=+0.138018645 container attach 4f985928d9b7bb120aa4a39f5da100303fe3acb0f7db65102aa35821139dc03f (image=quay.io/ceph/ceph:v18, name=sad_chaum, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 16:33:43 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:43.675+0000 7f5d20517140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 01 16:33:43 compute-0 ceph-mgr[74571]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 01 16:33:43 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'selftest'
Oct 01 16:33:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 01 16:33:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1684769264' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 16:33:43 compute-0 sad_chaum[74950]: 
Oct 01 16:33:43 compute-0 sad_chaum[74950]: {
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     "fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     "health": {
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "status": "HEALTH_OK",
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "checks": {},
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "mutes": []
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     },
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     "election_epoch": 5,
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     "quorum": [
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         0
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     ],
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     "quorum_names": [
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "compute-0"
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     ],
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     "quorum_age": 19,
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     "monmap": {
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "epoch": 1,
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "min_mon_release_name": "reef",
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "num_mons": 1
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     },
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     "osdmap": {
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "epoch": 1,
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "num_osds": 0,
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "num_up_osds": 0,
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "osd_up_since": 0,
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "num_in_osds": 0,
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "osd_in_since": 0,
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "num_remapped_pgs": 0
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     },
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     "pgmap": {
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "pgs_by_state": [],
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "num_pgs": 0,
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "num_pools": 0,
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "num_objects": 0,
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "data_bytes": 0,
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "bytes_used": 0,
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "bytes_avail": 0,
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "bytes_total": 0
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     },
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     "fsmap": {
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "epoch": 1,
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "by_rank": [],
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "up:standby": 0
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     },
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     "mgrmap": {
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "available": false,
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "num_standbys": 0,
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "modules": [
Oct 01 16:33:43 compute-0 sad_chaum[74950]:             "iostat",
Oct 01 16:33:43 compute-0 sad_chaum[74950]:             "nfs",
Oct 01 16:33:43 compute-0 sad_chaum[74950]:             "restful"
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         ],
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "services": {}
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     },
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     "servicemap": {
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "epoch": 1,
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "modified": "2025-10-01T16:33:19.992352+0000",
Oct 01 16:33:43 compute-0 sad_chaum[74950]:         "services": {}
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     },
Oct 01 16:33:43 compute-0 sad_chaum[74950]:     "progress_events": {}
Oct 01 16:33:43 compute-0 sad_chaum[74950]: }
Oct 01 16:33:43 compute-0 systemd[1]: libpod-4f985928d9b7bb120aa4a39f5da100303fe3acb0f7db65102aa35821139dc03f.scope: Deactivated successfully.
Oct 01 16:33:43 compute-0 podman[74936]: 2025-10-01 16:33:43.769670739 +0000 UTC m=+0.509901825 container died 4f985928d9b7bb120aa4a39f5da100303fe3acb0f7db65102aa35821139dc03f (image=quay.io/ceph/ceph:v18, name=sad_chaum, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:33:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a52380497e8878012ab964ce100d8f17ad05b1b3701e87b1651753143b0072fe-merged.mount: Deactivated successfully.
Oct 01 16:33:43 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1684769264' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 16:33:43 compute-0 podman[74936]: 2025-10-01 16:33:43.817079238 +0000 UTC m=+0.557310324 container remove 4f985928d9b7bb120aa4a39f5da100303fe3acb0f7db65102aa35821139dc03f (image=quay.io/ceph/ceph:v18, name=sad_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 01 16:33:43 compute-0 systemd[1]: libpod-conmon-4f985928d9b7bb120aa4a39f5da100303fe3acb0f7db65102aa35821139dc03f.scope: Deactivated successfully.
Oct 01 16:33:43 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:43.929+0000 7f5d20517140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 01 16:33:43 compute-0 ceph-mgr[74571]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 01 16:33:43 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'snap_schedule'
Oct 01 16:33:44 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:44.178+0000 7f5d20517140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 01 16:33:44 compute-0 ceph-mgr[74571]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 01 16:33:44 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'stats'
Oct 01 16:33:44 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'status'
Oct 01 16:33:44 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:44.695+0000 7f5d20517140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 01 16:33:44 compute-0 ceph-mgr[74571]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 01 16:33:44 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'telegraf'
Oct 01 16:33:44 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:44.944+0000 7f5d20517140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 01 16:33:44 compute-0 ceph-mgr[74571]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 01 16:33:44 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'telemetry'
Oct 01 16:33:45 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:45.595+0000 7f5d20517140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 01 16:33:45 compute-0 ceph-mgr[74571]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 01 16:33:45 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'test_orchestrator'
Oct 01 16:33:45 compute-0 podman[74989]: 2025-10-01 16:33:45.878726489 +0000 UTC m=+0.043959943 container create f9454c67b0f9a55c3aa02832d8c3eb45fe329bb4ad0fc77e32a2f048db696a27 (image=quay.io/ceph/ceph:v18, name=boring_mclean, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 16:33:45 compute-0 systemd[1]: Started libpod-conmon-f9454c67b0f9a55c3aa02832d8c3eb45fe329bb4ad0fc77e32a2f048db696a27.scope.
Oct 01 16:33:45 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77064b0874b5dcac26cfd79a0d816e385401335a8126878d3c22ba9fcb50fa87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77064b0874b5dcac26cfd79a0d816e385401335a8126878d3c22ba9fcb50fa87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77064b0874b5dcac26cfd79a0d816e385401335a8126878d3c22ba9fcb50fa87/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:45 compute-0 podman[74989]: 2025-10-01 16:33:45.858193532 +0000 UTC m=+0.023427066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:45 compute-0 podman[74989]: 2025-10-01 16:33:45.960420791 +0000 UTC m=+0.125654275 container init f9454c67b0f9a55c3aa02832d8c3eb45fe329bb4ad0fc77e32a2f048db696a27 (image=quay.io/ceph/ceph:v18, name=boring_mclean, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:33:45 compute-0 podman[74989]: 2025-10-01 16:33:45.965712621 +0000 UTC m=+0.130946065 container start f9454c67b0f9a55c3aa02832d8c3eb45fe329bb4ad0fc77e32a2f048db696a27 (image=quay.io/ceph/ceph:v18, name=boring_mclean, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 01 16:33:45 compute-0 podman[74989]: 2025-10-01 16:33:45.969424377 +0000 UTC m=+0.134657851 container attach f9454c67b0f9a55c3aa02832d8c3eb45fe329bb4ad0fc77e32a2f048db696a27 (image=quay.io/ceph/ceph:v18, name=boring_mclean, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 16:33:46 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:46.313+0000 7f5d20517140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 01 16:33:46 compute-0 ceph-mgr[74571]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 01 16:33:46 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'volumes'
Oct 01 16:33:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 01 16:33:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/446562810' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 16:33:46 compute-0 boring_mclean[75005]: 
Oct 01 16:33:46 compute-0 boring_mclean[75005]: {
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     "fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     "health": {
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "status": "HEALTH_OK",
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "checks": {},
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "mutes": []
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     },
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     "election_epoch": 5,
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     "quorum": [
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         0
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     ],
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     "quorum_names": [
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "compute-0"
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     ],
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     "quorum_age": 22,
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     "monmap": {
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "epoch": 1,
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "min_mon_release_name": "reef",
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "num_mons": 1
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     },
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     "osdmap": {
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "epoch": 1,
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "num_osds": 0,
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "num_up_osds": 0,
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "osd_up_since": 0,
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "num_in_osds": 0,
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "osd_in_since": 0,
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "num_remapped_pgs": 0
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     },
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     "pgmap": {
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "pgs_by_state": [],
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "num_pgs": 0,
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "num_pools": 0,
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "num_objects": 0,
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "data_bytes": 0,
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "bytes_used": 0,
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "bytes_avail": 0,
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "bytes_total": 0
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     },
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     "fsmap": {
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "epoch": 1,
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "by_rank": [],
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "up:standby": 0
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     },
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     "mgrmap": {
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "available": false,
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "num_standbys": 0,
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "modules": [
Oct 01 16:33:46 compute-0 boring_mclean[75005]:             "iostat",
Oct 01 16:33:46 compute-0 boring_mclean[75005]:             "nfs",
Oct 01 16:33:46 compute-0 boring_mclean[75005]:             "restful"
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         ],
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "services": {}
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     },
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     "servicemap": {
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "epoch": 1,
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "modified": "2025-10-01T16:33:19.992352+0000",
Oct 01 16:33:46 compute-0 boring_mclean[75005]:         "services": {}
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     },
Oct 01 16:33:46 compute-0 boring_mclean[75005]:     "progress_events": {}
Oct 01 16:33:46 compute-0 boring_mclean[75005]: }
Oct 01 16:33:46 compute-0 systemd[1]: libpod-f9454c67b0f9a55c3aa02832d8c3eb45fe329bb4ad0fc77e32a2f048db696a27.scope: Deactivated successfully.
Oct 01 16:33:46 compute-0 podman[74989]: 2025-10-01 16:33:46.368830211 +0000 UTC m=+0.534063695 container died f9454c67b0f9a55c3aa02832d8c3eb45fe329bb4ad0fc77e32a2f048db696a27 (image=quay.io/ceph/ceph:v18, name=boring_mclean, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 16:33:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-77064b0874b5dcac26cfd79a0d816e385401335a8126878d3c22ba9fcb50fa87-merged.mount: Deactivated successfully.
Oct 01 16:33:46 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/446562810' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 16:33:46 compute-0 podman[74989]: 2025-10-01 16:33:46.416376474 +0000 UTC m=+0.581609938 container remove f9454c67b0f9a55c3aa02832d8c3eb45fe329bb4ad0fc77e32a2f048db696a27 (image=quay.io/ceph/ceph:v18, name=boring_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:33:46 compute-0 systemd[1]: libpod-conmon-f9454c67b0f9a55c3aa02832d8c3eb45fe329bb4ad0fc77e32a2f048db696a27.scope: Deactivated successfully.
Oct 01 16:33:47 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:47.083+0000 7f5d20517140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'zabbix'
Oct 01 16:33:47 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:47.333+0000 7f5d20517140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: ms_deliver_dispatch: unhandled message 0x55e822c1b1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct 01 16:33:47 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.pmbdpj
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: mgr handle_mgr_map Activating!
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: mgr handle_mgr_map I am now activating
Oct 01 16:33:47 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.pmbdpj(active, starting, since 0.00991343s)
Oct 01 16:33:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct 01 16:33:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/806812274' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 01 16:33:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e1 all = 1
Oct 01 16:33:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 01 16:33:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/806812274' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 01 16:33:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct 01 16:33:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/806812274' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 01 16:33:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 01 16:33:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/806812274' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 01 16:33:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.pmbdpj", "id": "compute-0.pmbdpj"} v 0) v1
Oct 01 16:33:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/806812274' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.pmbdpj", "id": "compute-0.pmbdpj"}]: dispatch
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: balancer
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: crash
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [balancer INFO root] Starting
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: devicehealth
Oct 01 16:33:47 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Manager daemon compute-0.pmbdpj is now available
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:33:47
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [balancer INFO root] No pools available
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: iostat
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [devicehealth INFO root] Starting
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: nfs
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: orchestrator
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: pg_autoscaler
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: progress
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [progress INFO root] Loading...
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [progress INFO root] No stored events to load
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [progress INFO root] Loaded [] historic events
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [progress INFO root] Loaded OSDMap, ready.
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [rbd_support INFO root] recovery thread starting
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [rbd_support INFO root] starting setup
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: rbd_support
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: restful
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [restful INFO root] server_addr: :: server_port: 8003
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: status
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [restful WARNING root] server not running: no certificate configured
Oct 01 16:33:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pmbdpj/mirror_snapshot_schedule"} v 0) v1
Oct 01 16:33:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/806812274' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pmbdpj/mirror_snapshot_schedule"}]: dispatch
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: telemetry
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [rbd_support INFO root] PerfHandler: starting
Oct 01 16:33:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TaskHandler: starting
Oct 01 16:33:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pmbdpj/trash_purge_schedule"} v 0) v1
Oct 01 16:33:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/806812274' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pmbdpj/trash_purge_schedule"}]: dispatch
Oct 01 16:33:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/806812274' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:33:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: [rbd_support INFO root] setup complete
Oct 01 16:33:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/806812274' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:33:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Oct 01 16:33:47 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: volumes
Oct 01 16:33:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/806812274' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:33:47 compute-0 ceph-mon[74273]: Activating manager daemon compute-0.pmbdpj
Oct 01 16:33:47 compute-0 ceph-mon[74273]: mgrmap e2: compute-0.pmbdpj(active, starting, since 0.00991343s)
Oct 01 16:33:47 compute-0 ceph-mon[74273]: from='mgr.14102 192.168.122.100:0/806812274' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 01 16:33:47 compute-0 ceph-mon[74273]: from='mgr.14102 192.168.122.100:0/806812274' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 01 16:33:47 compute-0 ceph-mon[74273]: from='mgr.14102 192.168.122.100:0/806812274' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 01 16:33:47 compute-0 ceph-mon[74273]: from='mgr.14102 192.168.122.100:0/806812274' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 01 16:33:47 compute-0 ceph-mon[74273]: from='mgr.14102 192.168.122.100:0/806812274' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.pmbdpj", "id": "compute-0.pmbdpj"}]: dispatch
Oct 01 16:33:47 compute-0 ceph-mon[74273]: Manager daemon compute-0.pmbdpj is now available
Oct 01 16:33:47 compute-0 ceph-mon[74273]: from='mgr.14102 192.168.122.100:0/806812274' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pmbdpj/mirror_snapshot_schedule"}]: dispatch
Oct 01 16:33:47 compute-0 ceph-mon[74273]: from='mgr.14102 192.168.122.100:0/806812274' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pmbdpj/trash_purge_schedule"}]: dispatch
Oct 01 16:33:47 compute-0 ceph-mon[74273]: from='mgr.14102 192.168.122.100:0/806812274' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:33:47 compute-0 ceph-mon[74273]: from='mgr.14102 192.168.122.100:0/806812274' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:33:47 compute-0 ceph-mon[74273]: from='mgr.14102 192.168.122.100:0/806812274' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:33:48 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.pmbdpj(active, since 1.02095s)
Oct 01 16:33:48 compute-0 podman[75123]: 2025-10-01 16:33:48.471885057 +0000 UTC m=+0.034743691 container create 881ddf9c12a53a732e2f137da1202cec2f208b167a01bb9c1812b89069cec164 (image=quay.io/ceph/ceph:v18, name=peaceful_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:33:48 compute-0 systemd[1]: Started libpod-conmon-881ddf9c12a53a732e2f137da1202cec2f208b167a01bb9c1812b89069cec164.scope.
Oct 01 16:33:48 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d866bbedca84752d9769eabc5f5abe18ceb7252fabcd99e1c62cf79c55074735/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d866bbedca84752d9769eabc5f5abe18ceb7252fabcd99e1c62cf79c55074735/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d866bbedca84752d9769eabc5f5abe18ceb7252fabcd99e1c62cf79c55074735/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:48 compute-0 podman[75123]: 2025-10-01 16:33:48.456324978 +0000 UTC m=+0.019183662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:49 compute-0 ceph-mgr[74571]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 16:33:49 compute-0 podman[75123]: 2025-10-01 16:33:49.365755505 +0000 UTC m=+0.928614159 container init 881ddf9c12a53a732e2f137da1202cec2f208b167a01bb9c1812b89069cec164 (image=quay.io/ceph/ceph:v18, name=peaceful_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 16:33:49 compute-0 podman[75123]: 2025-10-01 16:33:49.371708755 +0000 UTC m=+0.934567389 container start 881ddf9c12a53a732e2f137da1202cec2f208b167a01bb9c1812b89069cec164 (image=quay.io/ceph/ceph:v18, name=peaceful_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:33:49 compute-0 podman[75123]: 2025-10-01 16:33:49.668845616 +0000 UTC m=+1.231704250 container attach 881ddf9c12a53a732e2f137da1202cec2f208b167a01bb9c1812b89069cec164 (image=quay.io/ceph/ceph:v18, name=peaceful_sutherland, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:33:49 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.pmbdpj(active, since 2s)
Oct 01 16:33:49 compute-0 ceph-mon[74273]: mgrmap e3: compute-0.pmbdpj(active, since 1.02095s)
Oct 01 16:33:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 01 16:33:49 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/213564519' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]: 
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]: {
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     "fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     "health": {
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "status": "HEALTH_OK",
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "checks": {},
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "mutes": []
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     },
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     "election_epoch": 5,
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     "quorum": [
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         0
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     ],
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     "quorum_names": [
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "compute-0"
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     ],
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     "quorum_age": 26,
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     "monmap": {
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "epoch": 1,
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "min_mon_release_name": "reef",
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "num_mons": 1
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     },
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     "osdmap": {
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "epoch": 1,
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "num_osds": 0,
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "num_up_osds": 0,
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "osd_up_since": 0,
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "num_in_osds": 0,
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "osd_in_since": 0,
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "num_remapped_pgs": 0
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     },
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     "pgmap": {
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "pgs_by_state": [],
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "num_pgs": 0,
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "num_pools": 0,
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "num_objects": 0,
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "data_bytes": 0,
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "bytes_used": 0,
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "bytes_avail": 0,
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "bytes_total": 0
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     },
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     "fsmap": {
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "epoch": 1,
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "by_rank": [],
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "up:standby": 0
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     },
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     "mgrmap": {
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "available": true,
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "num_standbys": 0,
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "modules": [
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:             "iostat",
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:             "nfs",
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:             "restful"
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         ],
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "services": {}
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     },
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     "servicemap": {
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "epoch": 1,
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "modified": "2025-10-01T16:33:19.992352+0000",
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:         "services": {}
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     },
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]:     "progress_events": {}
Oct 01 16:33:49 compute-0 peaceful_sutherland[75139]: }
Oct 01 16:33:49 compute-0 systemd[1]: libpod-881ddf9c12a53a732e2f137da1202cec2f208b167a01bb9c1812b89069cec164.scope: Deactivated successfully.
Oct 01 16:33:49 compute-0 podman[75123]: 2025-10-01 16:33:49.987628883 +0000 UTC m=+1.550487517 container died 881ddf9c12a53a732e2f137da1202cec2f208b167a01bb9c1812b89069cec164 (image=quay.io/ceph/ceph:v18, name=peaceful_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 16:33:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d866bbedca84752d9769eabc5f5abe18ceb7252fabcd99e1c62cf79c55074735-merged.mount: Deactivated successfully.
Oct 01 16:33:50 compute-0 podman[75123]: 2025-10-01 16:33:50.046558711 +0000 UTC m=+1.609417355 container remove 881ddf9c12a53a732e2f137da1202cec2f208b167a01bb9c1812b89069cec164 (image=quay.io/ceph/ceph:v18, name=peaceful_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:33:50 compute-0 systemd[1]: libpod-conmon-881ddf9c12a53a732e2f137da1202cec2f208b167a01bb9c1812b89069cec164.scope: Deactivated successfully.
Oct 01 16:33:50 compute-0 podman[75182]: 2025-10-01 16:33:50.12297444 +0000 UTC m=+0.054010274 container create e65dcdf315c2fca47396c8ff3675c95a8b3955853c92014bf3aa4f154e0891a9 (image=quay.io/ceph/ceph:v18, name=magical_chaum, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 16:33:50 compute-0 systemd[1]: Started libpod-conmon-e65dcdf315c2fca47396c8ff3675c95a8b3955853c92014bf3aa4f154e0891a9.scope.
Oct 01 16:33:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:50 compute-0 podman[75182]: 2025-10-01 16:33:50.097909807 +0000 UTC m=+0.028945631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/227e78b2855e41a173e50d89cbdf628a629ea2f71198d664811c2f8c0a6e020e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/227e78b2855e41a173e50d89cbdf628a629ea2f71198d664811c2f8c0a6e020e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/227e78b2855e41a173e50d89cbdf628a629ea2f71198d664811c2f8c0a6e020e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/227e78b2855e41a173e50d89cbdf628a629ea2f71198d664811c2f8c0a6e020e/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:50 compute-0 podman[75182]: 2025-10-01 16:33:50.209017282 +0000 UTC m=+0.140053126 container init e65dcdf315c2fca47396c8ff3675c95a8b3955853c92014bf3aa4f154e0891a9 (image=quay.io/ceph/ceph:v18, name=magical_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 16:33:50 compute-0 podman[75182]: 2025-10-01 16:33:50.216763728 +0000 UTC m=+0.147799562 container start e65dcdf315c2fca47396c8ff3675c95a8b3955853c92014bf3aa4f154e0891a9 (image=quay.io/ceph/ceph:v18, name=magical_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:33:50 compute-0 podman[75182]: 2025-10-01 16:33:50.221392534 +0000 UTC m=+0.152428378 container attach e65dcdf315c2fca47396c8ff3675c95a8b3955853c92014bf3aa4f154e0891a9 (image=quay.io/ceph/ceph:v18, name=magical_chaum, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 16:33:50 compute-0 ceph-mon[74273]: mgrmap e4: compute-0.pmbdpj(active, since 2s)
Oct 01 16:33:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/213564519' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 16:33:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 01 16:33:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/33976565' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 01 16:33:50 compute-0 systemd[1]: libpod-e65dcdf315c2fca47396c8ff3675c95a8b3955853c92014bf3aa4f154e0891a9.scope: Deactivated successfully.
Oct 01 16:33:50 compute-0 conmon[75198]: conmon e65dcdf315c2fca47396 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e65dcdf315c2fca47396c8ff3675c95a8b3955853c92014bf3aa4f154e0891a9.scope/container/memory.events
Oct 01 16:33:50 compute-0 podman[75182]: 2025-10-01 16:33:50.750830269 +0000 UTC m=+0.681866083 container died e65dcdf315c2fca47396c8ff3675c95a8b3955853c92014bf3aa4f154e0891a9 (image=quay.io/ceph/ceph:v18, name=magical_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:33:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-227e78b2855e41a173e50d89cbdf628a629ea2f71198d664811c2f8c0a6e020e-merged.mount: Deactivated successfully.
Oct 01 16:33:50 compute-0 podman[75182]: 2025-10-01 16:33:50.793567858 +0000 UTC m=+0.724603662 container remove e65dcdf315c2fca47396c8ff3675c95a8b3955853c92014bf3aa4f154e0891a9 (image=quay.io/ceph/ceph:v18, name=magical_chaum, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Oct 01 16:33:50 compute-0 systemd[1]: libpod-conmon-e65dcdf315c2fca47396c8ff3675c95a8b3955853c92014bf3aa4f154e0891a9.scope: Deactivated successfully.
Oct 01 16:33:50 compute-0 podman[75237]: 2025-10-01 16:33:50.863006131 +0000 UTC m=+0.048551696 container create 45eaf27ca0fd71ce9a8aeef9fc56233eb99163dd2736743b0f6d52c8923e3d2f (image=quay.io/ceph/ceph:v18, name=focused_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 16:33:50 compute-0 systemd[1]: Started libpod-conmon-45eaf27ca0fd71ce9a8aeef9fc56233eb99163dd2736743b0f6d52c8923e3d2f.scope.
Oct 01 16:33:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b93640d0fffdbab55625b2ff234b6739a31265eb805f6d4f286a54da52cafa1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b93640d0fffdbab55625b2ff234b6739a31265eb805f6d4f286a54da52cafa1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b93640d0fffdbab55625b2ff234b6739a31265eb805f6d4f286a54da52cafa1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:50 compute-0 podman[75237]: 2025-10-01 16:33:50.929283294 +0000 UTC m=+0.114828849 container init 45eaf27ca0fd71ce9a8aeef9fc56233eb99163dd2736743b0f6d52c8923e3d2f (image=quay.io/ceph/ceph:v18, name=focused_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:33:50 compute-0 podman[75237]: 2025-10-01 16:33:50.839395625 +0000 UTC m=+0.024941200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:50 compute-0 podman[75237]: 2025-10-01 16:33:50.936363553 +0000 UTC m=+0.121909088 container start 45eaf27ca0fd71ce9a8aeef9fc56233eb99163dd2736743b0f6d52c8923e3d2f (image=quay.io/ceph/ceph:v18, name=focused_torvalds, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 16:33:50 compute-0 podman[75237]: 2025-10-01 16:33:50.940061796 +0000 UTC m=+0.125607361 container attach 45eaf27ca0fd71ce9a8aeef9fc56233eb99163dd2736743b0f6d52c8923e3d2f (image=quay.io/ceph/ceph:v18, name=focused_torvalds, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:33:51 compute-0 ceph-mgr[74571]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 16:33:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Oct 01 16:33:51 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3893839395' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 01 16:33:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/33976565' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 01 16:33:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3893839395' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 01 16:33:51 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3893839395' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 01 16:33:51 compute-0 ceph-mgr[74571]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 01 16:33:51 compute-0 ceph-mgr[74571]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 01 16:33:51 compute-0 ceph-mgr[74571]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 01 16:33:51 compute-0 ceph-mgr[74571]: mgr respawn  1: '-n'
Oct 01 16:33:51 compute-0 ceph-mgr[74571]: mgr respawn  2: 'mgr.compute-0.pmbdpj'
Oct 01 16:33:51 compute-0 ceph-mgr[74571]: mgr respawn  3: '-f'
Oct 01 16:33:51 compute-0 ceph-mgr[74571]: mgr respawn  4: '--setuser'
Oct 01 16:33:51 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.pmbdpj(active, since 4s)
Oct 01 16:33:51 compute-0 systemd[1]: libpod-45eaf27ca0fd71ce9a8aeef9fc56233eb99163dd2736743b0f6d52c8923e3d2f.scope: Deactivated successfully.
Oct 01 16:33:51 compute-0 podman[75237]: 2025-10-01 16:33:51.734838088 +0000 UTC m=+0.920383623 container died 45eaf27ca0fd71ce9a8aeef9fc56233eb99163dd2736743b0f6d52c8923e3d2f (image=quay.io/ceph/ceph:v18, name=focused_torvalds, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 16:33:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b93640d0fffdbab55625b2ff234b6739a31265eb805f6d4f286a54da52cafa1-merged.mount: Deactivated successfully.
Oct 01 16:33:51 compute-0 podman[75237]: 2025-10-01 16:33:51.774751336 +0000 UTC m=+0.960296871 container remove 45eaf27ca0fd71ce9a8aeef9fc56233eb99163dd2736743b0f6d52c8923e3d2f (image=quay.io/ceph/ceph:v18, name=focused_torvalds, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:33:51 compute-0 systemd[1]: libpod-conmon-45eaf27ca0fd71ce9a8aeef9fc56233eb99163dd2736743b0f6d52c8923e3d2f.scope: Deactivated successfully.
Oct 01 16:33:51 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: ignoring --setuser ceph since I am not root
Oct 01 16:33:51 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: ignoring --setgroup ceph since I am not root
Oct 01 16:33:51 compute-0 ceph-mgr[74571]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 01 16:33:51 compute-0 ceph-mgr[74571]: pidfile_write: ignore empty --pid-file
Oct 01 16:33:51 compute-0 podman[75294]: 2025-10-01 16:33:51.837203433 +0000 UTC m=+0.043135950 container create d9206762538e8e4fe238ff7406f0edb103144d43bb022344172b32d7979e47a4 (image=quay.io/ceph/ceph:v18, name=stupefied_goldwasser, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 16:33:51 compute-0 systemd[1]: Started libpod-conmon-d9206762538e8e4fe238ff7406f0edb103144d43bb022344172b32d7979e47a4.scope.
Oct 01 16:33:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7c40fd25743437d9229a95fde88c425956b844ac3796cdc60fc35bbcef83539/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7c40fd25743437d9229a95fde88c425956b844ac3796cdc60fc35bbcef83539/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7c40fd25743437d9229a95fde88c425956b844ac3796cdc60fc35bbcef83539/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:51 compute-0 podman[75294]: 2025-10-01 16:33:51.912051802 +0000 UTC m=+0.117984339 container init d9206762538e8e4fe238ff7406f0edb103144d43bb022344172b32d7979e47a4 (image=quay.io/ceph/ceph:v18, name=stupefied_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 16:33:51 compute-0 podman[75294]: 2025-10-01 16:33:51.819097686 +0000 UTC m=+0.025030233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:51 compute-0 podman[75294]: 2025-10-01 16:33:51.917762316 +0000 UTC m=+0.123694843 container start d9206762538e8e4fe238ff7406f0edb103144d43bb022344172b32d7979e47a4 (image=quay.io/ceph/ceph:v18, name=stupefied_goldwasser, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 16:33:51 compute-0 podman[75294]: 2025-10-01 16:33:51.921230064 +0000 UTC m=+0.127162601 container attach d9206762538e8e4fe238ff7406f0edb103144d43bb022344172b32d7979e47a4 (image=quay.io/ceph/ceph:v18, name=stupefied_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:33:51 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'alerts'
Oct 01 16:33:52 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:52.241+0000 7f81cdee7140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 01 16:33:52 compute-0 ceph-mgr[74571]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 01 16:33:52 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'balancer'
Oct 01 16:33:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 01 16:33:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4252546028' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 01 16:33:52 compute-0 stupefied_goldwasser[75334]: {
Oct 01 16:33:52 compute-0 stupefied_goldwasser[75334]:     "epoch": 5,
Oct 01 16:33:52 compute-0 stupefied_goldwasser[75334]:     "available": true,
Oct 01 16:33:52 compute-0 stupefied_goldwasser[75334]:     "active_name": "compute-0.pmbdpj",
Oct 01 16:33:52 compute-0 stupefied_goldwasser[75334]:     "num_standby": 0
Oct 01 16:33:52 compute-0 stupefied_goldwasser[75334]: }
Oct 01 16:33:52 compute-0 systemd[1]: libpod-d9206762538e8e4fe238ff7406f0edb103144d43bb022344172b32d7979e47a4.scope: Deactivated successfully.
Oct 01 16:33:52 compute-0 podman[75294]: 2025-10-01 16:33:52.472234523 +0000 UTC m=+0.678167040 container died d9206762538e8e4fe238ff7406f0edb103144d43bb022344172b32d7979e47a4 (image=quay.io/ceph/ceph:v18, name=stupefied_goldwasser, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:33:52 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:52.490+0000 7f81cdee7140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 01 16:33:52 compute-0 ceph-mgr[74571]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 01 16:33:52 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'cephadm'
Oct 01 16:33:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7c40fd25743437d9229a95fde88c425956b844ac3796cdc60fc35bbcef83539-merged.mount: Deactivated successfully.
Oct 01 16:33:52 compute-0 podman[75294]: 2025-10-01 16:33:52.523484747 +0000 UTC m=+0.729417264 container remove d9206762538e8e4fe238ff7406f0edb103144d43bb022344172b32d7979e47a4 (image=quay.io/ceph/ceph:v18, name=stupefied_goldwasser, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 16:33:52 compute-0 systemd[1]: libpod-conmon-d9206762538e8e4fe238ff7406f0edb103144d43bb022344172b32d7979e47a4.scope: Deactivated successfully.
Oct 01 16:33:52 compute-0 podman[75374]: 2025-10-01 16:33:52.585346398 +0000 UTC m=+0.046793842 container create 619a64fe252a792da1ac42d6ec448d2ddddecdfa3334e61027aba952e7a7d72e (image=quay.io/ceph/ceph:v18, name=nice_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 16:33:52 compute-0 systemd[1]: Started libpod-conmon-619a64fe252a792da1ac42d6ec448d2ddddecdfa3334e61027aba952e7a7d72e.scope.
Oct 01 16:33:52 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:33:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41160dc86e9b6a14ae682a0a83f4c5b658146a9f6d31a3a5d09fd6a63842bc64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41160dc86e9b6a14ae682a0a83f4c5b658146a9f6d31a3a5d09fd6a63842bc64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41160dc86e9b6a14ae682a0a83f4c5b658146a9f6d31a3a5d09fd6a63842bc64/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:33:52 compute-0 podman[75374]: 2025-10-01 16:33:52.646050871 +0000 UTC m=+0.107498325 container init 619a64fe252a792da1ac42d6ec448d2ddddecdfa3334e61027aba952e7a7d72e (image=quay.io/ceph/ceph:v18, name=nice_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 16:33:52 compute-0 podman[75374]: 2025-10-01 16:33:52.556935791 +0000 UTC m=+0.018383285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:33:52 compute-0 podman[75374]: 2025-10-01 16:33:52.654285069 +0000 UTC m=+0.115732523 container start 619a64fe252a792da1ac42d6ec448d2ddddecdfa3334e61027aba952e7a7d72e (image=quay.io/ceph/ceph:v18, name=nice_albattani, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct 01 16:33:52 compute-0 podman[75374]: 2025-10-01 16:33:52.65790824 +0000 UTC m=+0.119355704 container attach 619a64fe252a792da1ac42d6ec448d2ddddecdfa3334e61027aba952e7a7d72e (image=quay.io/ceph/ceph:v18, name=nice_albattani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:33:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3893839395' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 01 16:33:52 compute-0 ceph-mon[74273]: mgrmap e5: compute-0.pmbdpj(active, since 4s)
Oct 01 16:33:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4252546028' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 01 16:33:54 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'crash'
Oct 01 16:33:54 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:54.684+0000 7f81cdee7140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 01 16:33:54 compute-0 ceph-mgr[74571]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 01 16:33:54 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'dashboard'
Oct 01 16:33:56 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'devicehealth'
Oct 01 16:33:56 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:56.363+0000 7f81cdee7140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 01 16:33:56 compute-0 ceph-mgr[74571]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 01 16:33:56 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'diskprediction_local'
Oct 01 16:33:56 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 01 16:33:56 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 01 16:33:56 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]:   from numpy import show_config as show_numpy_config
Oct 01 16:33:56 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:56.889+0000 7f81cdee7140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 01 16:33:56 compute-0 ceph-mgr[74571]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 01 16:33:56 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'influx'
Oct 01 16:33:57 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:57.130+0000 7f81cdee7140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 01 16:33:57 compute-0 ceph-mgr[74571]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 01 16:33:57 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'insights'
Oct 01 16:33:57 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'iostat'
Oct 01 16:33:57 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:33:57.622+0000 7f81cdee7140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 01 16:33:57 compute-0 ceph-mgr[74571]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 01 16:33:57 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'k8sevents'
Oct 01 16:33:59 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'localpool'
Oct 01 16:33:59 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'mds_autoscaler'
Oct 01 16:34:00 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'mirroring'
Oct 01 16:34:00 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'nfs'
Oct 01 16:34:01 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:34:01.196+0000 7f81cdee7140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 01 16:34:01 compute-0 ceph-mgr[74571]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 01 16:34:01 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'orchestrator'
Oct 01 16:34:01 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:34:01.848+0000 7f81cdee7140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 01 16:34:01 compute-0 ceph-mgr[74571]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 01 16:34:01 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'osd_perf_query'
Oct 01 16:34:02 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:34:02.124+0000 7f81cdee7140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 01 16:34:02 compute-0 ceph-mgr[74571]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 01 16:34:02 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'osd_support'
Oct 01 16:34:02 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:34:02.407+0000 7f81cdee7140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 01 16:34:02 compute-0 ceph-mgr[74571]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 01 16:34:02 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'pg_autoscaler'
Oct 01 16:34:02 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:34:02.688+0000 7f81cdee7140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 01 16:34:02 compute-0 ceph-mgr[74571]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 01 16:34:02 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'progress'
Oct 01 16:34:02 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:34:02.930+0000 7f81cdee7140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 01 16:34:02 compute-0 ceph-mgr[74571]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 01 16:34:02 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'prometheus'
Oct 01 16:34:03 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:34:03.938+0000 7f81cdee7140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 01 16:34:03 compute-0 ceph-mgr[74571]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 01 16:34:03 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'rbd_support'
Oct 01 16:34:04 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:34:04.218+0000 7f81cdee7140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 01 16:34:04 compute-0 ceph-mgr[74571]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 01 16:34:04 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'restful'
Oct 01 16:34:04 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'rgw'
Oct 01 16:34:05 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:34:05.600+0000 7f81cdee7140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 01 16:34:05 compute-0 ceph-mgr[74571]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 01 16:34:05 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'rook'
Oct 01 16:34:07 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:34:07.736+0000 7f81cdee7140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 01 16:34:07 compute-0 ceph-mgr[74571]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 01 16:34:07 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'selftest'
Oct 01 16:34:07 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:34:07.991+0000 7f81cdee7140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 01 16:34:07 compute-0 ceph-mgr[74571]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 01 16:34:07 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'snap_schedule'
Oct 01 16:34:08 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:34:08.246+0000 7f81cdee7140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 01 16:34:08 compute-0 ceph-mgr[74571]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 01 16:34:08 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'stats'
Oct 01 16:34:08 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'status'
Oct 01 16:34:08 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:34:08.760+0000 7f81cdee7140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 01 16:34:08 compute-0 ceph-mgr[74571]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 01 16:34:08 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'telegraf'
Oct 01 16:34:09 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:34:09.005+0000 7f81cdee7140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 01 16:34:09 compute-0 ceph-mgr[74571]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 01 16:34:09 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'telemetry'
Oct 01 16:34:09 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:34:09.608+0000 7f81cdee7140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 01 16:34:09 compute-0 ceph-mgr[74571]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 01 16:34:09 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'test_orchestrator'
Oct 01 16:34:10 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:34:10.296+0000 7f81cdee7140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 01 16:34:10 compute-0 ceph-mgr[74571]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 01 16:34:10 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'volumes'
Oct 01 16:34:11 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:34:11.005+0000 7f81cdee7140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: mgr[py] Loading python module 'zabbix'
Oct 01 16:34:11 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T16:34:11.239+0000 7f81cdee7140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 01 16:34:11 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Active manager daemon compute-0.pmbdpj restarted
Oct 01 16:34:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct 01 16:34:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 16:34:11 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.pmbdpj
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: ms_deliver_dispatch: unhandled message 0x5625c1eab1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct 01 16:34:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 01 16:34:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 01 16:34:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: mgr handle_mgr_map Activating!
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: mgr handle_mgr_map I am now activating
Oct 01 16:34:11 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct 01 16:34:11 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.pmbdpj(active, starting, since 0.0196787s)
Oct 01 16:34:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 01 16:34:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 01 16:34:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.pmbdpj", "id": "compute-0.pmbdpj"} v 0) v1
Oct 01 16:34:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.pmbdpj", "id": "compute-0.pmbdpj"}]: dispatch
Oct 01 16:34:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct 01 16:34:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 01 16:34:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e1 all = 1
Oct 01 16:34:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 01 16:34:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 01 16:34:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct 01 16:34:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: balancer
Oct 01 16:34:11 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Manager daemon compute-0.pmbdpj is now available
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Starting
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:34:11
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [balancer INFO root] No pools available
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct 01 16:34:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Oct 01 16:34:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Oct 01 16:34:11 compute-0 ceph-mon[74273]: Active manager daemon compute-0.pmbdpj restarted
Oct 01 16:34:11 compute-0 ceph-mon[74273]: Activating manager daemon compute-0.pmbdpj
Oct 01 16:34:11 compute-0 ceph-mon[74273]: osdmap e2: 0 total, 0 up, 0 in
Oct 01 16:34:11 compute-0 ceph-mon[74273]: mgrmap e6: compute-0.pmbdpj(active, starting, since 0.0196787s)
Oct 01 16:34:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 01 16:34:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.pmbdpj", "id": "compute-0.pmbdpj"}]: dispatch
Oct 01 16:34:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 01 16:34:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 01 16:34:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 01 16:34:11 compute-0 ceph-mon[74273]: Manager daemon compute-0.pmbdpj is now available
Oct 01 16:34:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: cephadm
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: crash
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: devicehealth
Oct 01 16:34:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 01 16:34:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: iostat
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [devicehealth INFO root] Starting
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: nfs
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: orchestrator
Oct 01 16:34:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 01 16:34:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: pg_autoscaler
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: progress
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [progress INFO root] Loading...
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [progress INFO root] No stored events to load
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [progress INFO root] Loaded [] historic events
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [progress INFO root] Loaded OSDMap, ready.
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] recovery thread starting
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] starting setup
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: rbd_support
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: restful
Oct 01 16:34:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pmbdpj/mirror_snapshot_schedule"} v 0) v1
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [restful INFO root] server_addr: :: server_port: 8003
Oct 01 16:34:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pmbdpj/mirror_snapshot_schedule"}]: dispatch
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [restful WARNING root] server not running: no certificate configured
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: status
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: telemetry
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] PerfHandler: starting
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TaskHandler: starting
Oct 01 16:34:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pmbdpj/trash_purge_schedule"} v 0) v1
Oct 01 16:34:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pmbdpj/trash_purge_schedule"}]: dispatch
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] setup complete
Oct 01 16:34:11 compute-0 ceph-mgr[74571]: mgr load Constructed class from module: volumes
Oct 01 16:34:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Oct 01 16:34:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Oct 01 16:34:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:12 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.pmbdpj(active, since 1.02575s)
Oct 01 16:34:12 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 01 16:34:12 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 01 16:34:12 compute-0 nice_albattani[75391]: {
Oct 01 16:34:12 compute-0 nice_albattani[75391]:     "mgrmap_epoch": 7,
Oct 01 16:34:12 compute-0 nice_albattani[75391]:     "initialized": true
Oct 01 16:34:12 compute-0 nice_albattani[75391]: }
Oct 01 16:34:12 compute-0 systemd[1]: libpod-619a64fe252a792da1ac42d6ec448d2ddddecdfa3334e61027aba952e7a7d72e.scope: Deactivated successfully.
Oct 01 16:34:12 compute-0 podman[75374]: 2025-10-01 16:34:12.294812413 +0000 UTC m=+19.756259927 container died 619a64fe252a792da1ac42d6ec448d2ddddecdfa3334e61027aba952e7a7d72e (image=quay.io/ceph/ceph:v18, name=nice_albattani, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:34:12 compute-0 ceph-mon[74273]: Found migration_current of "None". Setting to last migration.
Oct 01 16:34:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 16:34:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 16:34:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pmbdpj/mirror_snapshot_schedule"}]: dispatch
Oct 01 16:34:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pmbdpj/trash_purge_schedule"}]: dispatch
Oct 01 16:34:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:12 compute-0 ceph-mon[74273]: mgrmap e7: compute-0.pmbdpj(active, since 1.02575s)
Oct 01 16:34:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-41160dc86e9b6a14ae682a0a83f4c5b658146a9f6d31a3a5d09fd6a63842bc64-merged.mount: Deactivated successfully.
Oct 01 16:34:12 compute-0 podman[75374]: 2025-10-01 16:34:12.348108489 +0000 UTC m=+19.809555963 container remove 619a64fe252a792da1ac42d6ec448d2ddddecdfa3334e61027aba952e7a7d72e (image=quay.io/ceph/ceph:v18, name=nice_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:12 compute-0 systemd[1]: libpod-conmon-619a64fe252a792da1ac42d6ec448d2ddddecdfa3334e61027aba952e7a7d72e.scope: Deactivated successfully.
Oct 01 16:34:12 compute-0 podman[75552]: 2025-10-01 16:34:12.415611593 +0000 UTC m=+0.043793767 container create 8becada522e40ec065988c90aa5f63d0cb9e317b0ce25d835b0a5315707f0fca (image=quay.io/ceph/ceph:v18, name=pedantic_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 16:34:12 compute-0 systemd[1]: Started libpod-conmon-8becada522e40ec065988c90aa5f63d0cb9e317b0ce25d835b0a5315707f0fca.scope.
Oct 01 16:34:12 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ae468ded7e2f52c21155016cd766850c8e3d292bc9c1959ffda0d689fa8d59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ae468ded7e2f52c21155016cd766850c8e3d292bc9c1959ffda0d689fa8d59/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ae468ded7e2f52c21155016cd766850c8e3d292bc9c1959ffda0d689fa8d59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:12 compute-0 podman[75552]: 2025-10-01 16:34:12.489474287 +0000 UTC m=+0.117656491 container init 8becada522e40ec065988c90aa5f63d0cb9e317b0ce25d835b0a5315707f0fca (image=quay.io/ceph/ceph:v18, name=pedantic_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 16:34:12 compute-0 podman[75552]: 2025-10-01 16:34:12.3964801 +0000 UTC m=+0.024662314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:12 compute-0 podman[75552]: 2025-10-01 16:34:12.494789402 +0000 UTC m=+0.122971596 container start 8becada522e40ec065988c90aa5f63d0cb9e317b0ce25d835b0a5315707f0fca (image=quay.io/ceph/ceph:v18, name=pedantic_mendeleev, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:34:12 compute-0 podman[75552]: 2025-10-01 16:34:12.498935946 +0000 UTC m=+0.127118160 container attach 8becada522e40ec065988c90aa5f63d0cb9e317b0ce25d835b0a5315707f0fca (image=quay.io/ceph/ceph:v18, name=pedantic_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:34:12 compute-0 ceph-mgr[74571]: [cephadm INFO cherrypy.error] [01/Oct/2025:16:34:12] ENGINE Bus STARTING
Oct 01 16:34:12 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : [01/Oct/2025:16:34:12] ENGINE Bus STARTING
Oct 01 16:34:13 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Oct 01 16:34:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 01 16:34:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 16:34:13 compute-0 systemd[1]: libpod-8becada522e40ec065988c90aa5f63d0cb9e317b0ce25d835b0a5315707f0fca.scope: Deactivated successfully.
Oct 01 16:34:13 compute-0 podman[75552]: 2025-10-01 16:34:13.090618492 +0000 UTC m=+0.718800676 container died 8becada522e40ec065988c90aa5f63d0cb9e317b0ce25d835b0a5315707f0fca (image=quay.io/ceph/ceph:v18, name=pedantic_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 16:34:13 compute-0 ceph-mgr[74571]: [cephadm INFO cherrypy.error] [01/Oct/2025:16:34:13] ENGINE Serving on https://192.168.122.100:7150
Oct 01 16:34:13 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : [01/Oct/2025:16:34:13] ENGINE Serving on https://192.168.122.100:7150
Oct 01 16:34:13 compute-0 ceph-mgr[74571]: [cephadm INFO cherrypy.error] [01/Oct/2025:16:34:13] ENGINE Client ('192.168.122.100', 51146) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 01 16:34:13 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : [01/Oct/2025:16:34:13] ENGINE Client ('192.168.122.100', 51146) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 01 16:34:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1ae468ded7e2f52c21155016cd766850c8e3d292bc9c1959ffda0d689fa8d59-merged.mount: Deactivated successfully.
Oct 01 16:34:13 compute-0 podman[75552]: 2025-10-01 16:34:13.13097015 +0000 UTC m=+0.759152334 container remove 8becada522e40ec065988c90aa5f63d0cb9e317b0ce25d835b0a5315707f0fca (image=quay.io/ceph/ceph:v18, name=pedantic_mendeleev, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 16:34:13 compute-0 systemd[1]: libpod-conmon-8becada522e40ec065988c90aa5f63d0cb9e317b0ce25d835b0a5315707f0fca.scope: Deactivated successfully.
Oct 01 16:34:13 compute-0 podman[75630]: 2025-10-01 16:34:13.201038239 +0000 UTC m=+0.050590988 container create 76031414787f7ec3642090b960020d18eea16a693c821d7f76049240b43e522d (image=quay.io/ceph/ceph:v18, name=gifted_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 16:34:13 compute-0 ceph-mgr[74571]: [cephadm INFO cherrypy.error] [01/Oct/2025:16:34:13] ENGINE Serving on http://192.168.122.100:8765
Oct 01 16:34:13 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : [01/Oct/2025:16:34:13] ENGINE Serving on http://192.168.122.100:8765
Oct 01 16:34:13 compute-0 ceph-mgr[74571]: [cephadm INFO cherrypy.error] [01/Oct/2025:16:34:13] ENGINE Bus STARTED
Oct 01 16:34:13 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : [01/Oct/2025:16:34:13] ENGINE Bus STARTED
Oct 01 16:34:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 01 16:34:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 16:34:13 compute-0 systemd[1]: Started libpod-conmon-76031414787f7ec3642090b960020d18eea16a693c821d7f76049240b43e522d.scope.
Oct 01 16:34:13 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9645d08233048481105118cf5caba2ca9e35aed3c3f04ee9a3bfe65ed515f47/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9645d08233048481105118cf5caba2ca9e35aed3c3f04ee9a3bfe65ed515f47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9645d08233048481105118cf5caba2ca9e35aed3c3f04ee9a3bfe65ed515f47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:13 compute-0 ceph-mgr[74571]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 16:34:13 compute-0 podman[75630]: 2025-10-01 16:34:13.175353301 +0000 UTC m=+0.024906120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:13 compute-0 podman[75630]: 2025-10-01 16:34:13.275738935 +0000 UTC m=+0.125291684 container init 76031414787f7ec3642090b960020d18eea16a693c821d7f76049240b43e522d (image=quay.io/ceph/ceph:v18, name=gifted_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 01 16:34:13 compute-0 podman[75630]: 2025-10-01 16:34:13.288058906 +0000 UTC m=+0.137611665 container start 76031414787f7ec3642090b960020d18eea16a693c821d7f76049240b43e522d (image=quay.io/ceph/ceph:v18, name=gifted_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 16:34:13 compute-0 podman[75630]: 2025-10-01 16:34:13.293748489 +0000 UTC m=+0.143301238 container attach 76031414787f7ec3642090b960020d18eea16a693c821d7f76049240b43e522d (image=quay.io/ceph/ceph:v18, name=gifted_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 16:34:13 compute-0 ceph-mon[74273]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 01 16:34:13 compute-0 ceph-mon[74273]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 01 16:34:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 16:34:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 16:34:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019921210 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:34:13 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Oct 01 16:34:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:13 compute-0 ceph-mgr[74571]: [cephadm INFO root] Set ssh ssh_user
Oct 01 16:34:13 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct 01 16:34:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Oct 01 16:34:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:13 compute-0 ceph-mgr[74571]: [cephadm INFO root] Set ssh ssh_config
Oct 01 16:34:13 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct 01 16:34:13 compute-0 ceph-mgr[74571]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct 01 16:34:13 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct 01 16:34:13 compute-0 gifted_perlman[75647]: ssh user set to ceph-admin. sudo will be used
Oct 01 16:34:13 compute-0 systemd[1]: libpod-76031414787f7ec3642090b960020d18eea16a693c821d7f76049240b43e522d.scope: Deactivated successfully.
Oct 01 16:34:13 compute-0 podman[75630]: 2025-10-01 16:34:13.839169268 +0000 UTC m=+0.688721997 container died 76031414787f7ec3642090b960020d18eea16a693c821d7f76049240b43e522d (image=quay.io/ceph/ceph:v18, name=gifted_perlman, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 16:34:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9645d08233048481105118cf5caba2ca9e35aed3c3f04ee9a3bfe65ed515f47-merged.mount: Deactivated successfully.
Oct 01 16:34:13 compute-0 podman[75630]: 2025-10-01 16:34:13.880284126 +0000 UTC m=+0.729836855 container remove 76031414787f7ec3642090b960020d18eea16a693c821d7f76049240b43e522d (image=quay.io/ceph/ceph:v18, name=gifted_perlman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:34:13 compute-0 systemd[1]: libpod-conmon-76031414787f7ec3642090b960020d18eea16a693c821d7f76049240b43e522d.scope: Deactivated successfully.
Oct 01 16:34:13 compute-0 podman[75686]: 2025-10-01 16:34:13.940699831 +0000 UTC m=+0.041694204 container create 712c8017d9072ecc458c10c84f0b7f3041d1817d2082bc5c79189f447db743d5 (image=quay.io/ceph/ceph:v18, name=upbeat_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:34:13 compute-0 systemd[1]: Started libpod-conmon-712c8017d9072ecc458c10c84f0b7f3041d1817d2082bc5c79189f447db743d5.scope.
Oct 01 16:34:14 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59d9e626e74c29230e46c127f584834dbbfe6d7146ff3e3195ff1551668158b1/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59d9e626e74c29230e46c127f584834dbbfe6d7146ff3e3195ff1551668158b1/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59d9e626e74c29230e46c127f584834dbbfe6d7146ff3e3195ff1551668158b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59d9e626e74c29230e46c127f584834dbbfe6d7146ff3e3195ff1551668158b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59d9e626e74c29230e46c127f584834dbbfe6d7146ff3e3195ff1551668158b1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:14 compute-0 podman[75686]: 2025-10-01 16:34:13.924302777 +0000 UTC m=+0.025297160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:14 compute-0 podman[75686]: 2025-10-01 16:34:14.057838558 +0000 UTC m=+0.158832951 container init 712c8017d9072ecc458c10c84f0b7f3041d1817d2082bc5c79189f447db743d5 (image=quay.io/ceph/ceph:v18, name=upbeat_goldberg, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 16:34:14 compute-0 podman[75686]: 2025-10-01 16:34:14.069148683 +0000 UTC m=+0.170143056 container start 712c8017d9072ecc458c10c84f0b7f3041d1817d2082bc5c79189f447db743d5 (image=quay.io/ceph/ceph:v18, name=upbeat_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 16:34:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.pmbdpj(active, since 2s)
Oct 01 16:34:14 compute-0 podman[75686]: 2025-10-01 16:34:14.07415935 +0000 UTC m=+0.175153713 container attach 712c8017d9072ecc458c10c84f0b7f3041d1817d2082bc5c79189f447db743d5 (image=quay.io/ceph/ceph:v18, name=upbeat_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:34:14 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Oct 01 16:34:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:14 compute-0 ceph-mgr[74571]: [cephadm INFO root] Set ssh ssh_identity_key
Oct 01 16:34:14 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct 01 16:34:14 compute-0 ceph-mgr[74571]: [cephadm INFO root] Set ssh private key
Oct 01 16:34:14 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Set ssh private key
Oct 01 16:34:14 compute-0 systemd[1]: libpod-712c8017d9072ecc458c10c84f0b7f3041d1817d2082bc5c79189f447db743d5.scope: Deactivated successfully.
Oct 01 16:34:14 compute-0 podman[75686]: 2025-10-01 16:34:14.595603163 +0000 UTC m=+0.696597556 container died 712c8017d9072ecc458c10c84f0b7f3041d1817d2082bc5c79189f447db743d5 (image=quay.io/ceph/ceph:v18, name=upbeat_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:34:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-59d9e626e74c29230e46c127f584834dbbfe6d7146ff3e3195ff1551668158b1-merged.mount: Deactivated successfully.
Oct 01 16:34:14 compute-0 podman[75686]: 2025-10-01 16:34:14.638840205 +0000 UTC m=+0.739834568 container remove 712c8017d9072ecc458c10c84f0b7f3041d1817d2082bc5c79189f447db743d5 (image=quay.io/ceph/ceph:v18, name=upbeat_goldberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:14 compute-0 systemd[1]: libpod-conmon-712c8017d9072ecc458c10c84f0b7f3041d1817d2082bc5c79189f447db743d5.scope: Deactivated successfully.
Oct 01 16:34:14 compute-0 podman[75743]: 2025-10-01 16:34:14.705441166 +0000 UTC m=+0.044268939 container create 19ff44004f9da5dcc5fed6cbaa642686d9ab0507d7d0a3413fb620c0abce5f78 (image=quay.io/ceph/ceph:v18, name=epic_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:34:14 compute-0 systemd[1]: Started libpod-conmon-19ff44004f9da5dcc5fed6cbaa642686d9ab0507d7d0a3413fb620c0abce5f78.scope.
Oct 01 16:34:14 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e3295f26f435389d8550cce7da65afe0ecd6b6d42ed800f2d4da59f0e803d88/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e3295f26f435389d8550cce7da65afe0ecd6b6d42ed800f2d4da59f0e803d88/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e3295f26f435389d8550cce7da65afe0ecd6b6d42ed800f2d4da59f0e803d88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e3295f26f435389d8550cce7da65afe0ecd6b6d42ed800f2d4da59f0e803d88/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e3295f26f435389d8550cce7da65afe0ecd6b6d42ed800f2d4da59f0e803d88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:14 compute-0 podman[75743]: 2025-10-01 16:34:14.771922564 +0000 UTC m=+0.110750357 container init 19ff44004f9da5dcc5fed6cbaa642686d9ab0507d7d0a3413fb620c0abce5f78 (image=quay.io/ceph/ceph:v18, name=epic_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 16:34:14 compute-0 podman[75743]: 2025-10-01 16:34:14.686276952 +0000 UTC m=+0.025104725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:14 compute-0 podman[75743]: 2025-10-01 16:34:14.782314416 +0000 UTC m=+0.121142169 container start 19ff44004f9da5dcc5fed6cbaa642686d9ab0507d7d0a3413fb620c0abce5f78 (image=quay.io/ceph/ceph:v18, name=epic_brattain, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 16:34:14 compute-0 podman[75743]: 2025-10-01 16:34:14.785430205 +0000 UTC m=+0.124257968 container attach 19ff44004f9da5dcc5fed6cbaa642686d9ab0507d7d0a3413fb620c0abce5f78 (image=quay.io/ceph/ceph:v18, name=epic_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:34:14 compute-0 ceph-mon[74273]: [01/Oct/2025:16:34:12] ENGINE Bus STARTING
Oct 01 16:34:14 compute-0 ceph-mon[74273]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:14 compute-0 ceph-mon[74273]: [01/Oct/2025:16:34:13] ENGINE Serving on https://192.168.122.100:7150
Oct 01 16:34:14 compute-0 ceph-mon[74273]: [01/Oct/2025:16:34:13] ENGINE Client ('192.168.122.100', 51146) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 01 16:34:14 compute-0 ceph-mon[74273]: [01/Oct/2025:16:34:13] ENGINE Serving on http://192.168.122.100:8765
Oct 01 16:34:14 compute-0 ceph-mon[74273]: [01/Oct/2025:16:34:13] ENGINE Bus STARTED
Oct 01 16:34:14 compute-0 ceph-mon[74273]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:14 compute-0 ceph-mon[74273]: Set ssh ssh_user
Oct 01 16:34:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:14 compute-0 ceph-mon[74273]: Set ssh ssh_config
Oct 01 16:34:14 compute-0 ceph-mon[74273]: ssh user set to ceph-admin. sudo will be used
Oct 01 16:34:14 compute-0 ceph-mon[74273]: mgrmap e8: compute-0.pmbdpj(active, since 2s)
Oct 01 16:34:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:15 compute-0 ceph-mgr[74571]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 16:34:15 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Oct 01 16:34:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:15 compute-0 ceph-mgr[74571]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct 01 16:34:15 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct 01 16:34:15 compute-0 systemd[1]: libpod-19ff44004f9da5dcc5fed6cbaa642686d9ab0507d7d0a3413fb620c0abce5f78.scope: Deactivated successfully.
Oct 01 16:34:15 compute-0 conmon[75759]: conmon 19ff44004f9da5dcc5fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-19ff44004f9da5dcc5fed6cbaa642686d9ab0507d7d0a3413fb620c0abce5f78.scope/container/memory.events
Oct 01 16:34:15 compute-0 podman[75743]: 2025-10-01 16:34:15.387675918 +0000 UTC m=+0.726503681 container died 19ff44004f9da5dcc5fed6cbaa642686d9ab0507d7d0a3413fb620c0abce5f78 (image=quay.io/ceph/ceph:v18, name=epic_brattain, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 16:34:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e3295f26f435389d8550cce7da65afe0ecd6b6d42ed800f2d4da59f0e803d88-merged.mount: Deactivated successfully.
Oct 01 16:34:15 compute-0 podman[75743]: 2025-10-01 16:34:15.453080519 +0000 UTC m=+0.791908312 container remove 19ff44004f9da5dcc5fed6cbaa642686d9ab0507d7d0a3413fb620c0abce5f78 (image=quay.io/ceph/ceph:v18, name=epic_brattain, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 16:34:15 compute-0 systemd[1]: libpod-conmon-19ff44004f9da5dcc5fed6cbaa642686d9ab0507d7d0a3413fb620c0abce5f78.scope: Deactivated successfully.
Oct 01 16:34:15 compute-0 podman[75797]: 2025-10-01 16:34:15.520161933 +0000 UTC m=+0.047605203 container create d89568a19f7e50ba8f9311089f22ed60d43e1b4734aaf88a8ac3670e1e851b9c (image=quay.io/ceph/ceph:v18, name=elastic_wright, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 16:34:15 compute-0 systemd[1]: Started libpod-conmon-d89568a19f7e50ba8f9311089f22ed60d43e1b4734aaf88a8ac3670e1e851b9c.scope.
Oct 01 16:34:15 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde5ab88d89ecc393d49ce3358cd004770c495b1b9b4294585a1ebad3834dd5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde5ab88d89ecc393d49ce3358cd004770c495b1b9b4294585a1ebad3834dd5e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde5ab88d89ecc393d49ce3358cd004770c495b1b9b4294585a1ebad3834dd5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:15 compute-0 podman[75797]: 2025-10-01 16:34:15.499327157 +0000 UTC m=+0.026770537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:15 compute-0 podman[75797]: 2025-10-01 16:34:15.617703915 +0000 UTC m=+0.145147205 container init d89568a19f7e50ba8f9311089f22ed60d43e1b4734aaf88a8ac3670e1e851b9c (image=quay.io/ceph/ceph:v18, name=elastic_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 16:34:15 compute-0 podman[75797]: 2025-10-01 16:34:15.622338622 +0000 UTC m=+0.149781912 container start d89568a19f7e50ba8f9311089f22ed60d43e1b4734aaf88a8ac3670e1e851b9c (image=quay.io/ceph/ceph:v18, name=elastic_wright, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:34:15 compute-0 podman[75797]: 2025-10-01 16:34:15.625624985 +0000 UTC m=+0.153068275 container attach d89568a19f7e50ba8f9311089f22ed60d43e1b4734aaf88a8ac3670e1e851b9c (image=quay.io/ceph/ceph:v18, name=elastic_wright, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 16:34:15 compute-0 ceph-mon[74273]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:15 compute-0 ceph-mon[74273]: Set ssh ssh_identity_key
Oct 01 16:34:15 compute-0 ceph-mon[74273]: Set ssh private key
Oct 01 16:34:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:16 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:16 compute-0 elastic_wright[75813]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjKRo4/0YRZZKOKE5RnP4kHB8mb1ILOY8LfXrPVoJjSuGvTID/hejxcJ+t8yQIu6mAAqzp6j2flBSuvarwnk92/G6257ewLPXfegouZ1+gIlUDanEb23KTS33DYWueQkI/RCG5mVWQpghvRKJDIJe/hT/xBG0Rp+YKLsSytvC6CuHu188jDVyYn0nlHGUf4GX/fIbWekEyaTZxyltcsgY7UV6Plk3LOGDN5pS6/8kIjHGJCUxo2I/1anH+sItOE0zZEDmb8QbnoVEM21u9IjCOF+BTvrKhZpOfOpIUckLgokXEi5Tt3k+N0fecVbAsHU58HV7Rr49Opw7x1IeKyDS8gn+bSovtrm9ju6aVcjcq86K1vZZeVnQXgdP5Y2JruHsMoVn5/tHZqfxifLC8Gmajj2Ly9r7vzHoskYoiG3DvGay1hcSLaMtjpG99N3C8tl+T1ZI9pHGKKylSFZC59XkaUtuTTsCwzjSfbOQNiRNoUHYEZDvLJ2Dd5nQ4o7lrQpk= zuul@controller
Oct 01 16:34:16 compute-0 systemd[1]: libpod-d89568a19f7e50ba8f9311089f22ed60d43e1b4734aaf88a8ac3670e1e851b9c.scope: Deactivated successfully.
Oct 01 16:34:16 compute-0 podman[75839]: 2025-10-01 16:34:16.25371868 +0000 UTC m=+0.039251012 container died d89568a19f7e50ba8f9311089f22ed60d43e1b4734aaf88a8ac3670e1e851b9c (image=quay.io/ceph/ceph:v18, name=elastic_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-dde5ab88d89ecc393d49ce3358cd004770c495b1b9b4294585a1ebad3834dd5e-merged.mount: Deactivated successfully.
Oct 01 16:34:16 compute-0 podman[75839]: 2025-10-01 16:34:16.380381627 +0000 UTC m=+0.165913959 container remove d89568a19f7e50ba8f9311089f22ed60d43e1b4734aaf88a8ac3670e1e851b9c (image=quay.io/ceph/ceph:v18, name=elastic_wright, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 16:34:16 compute-0 systemd[1]: libpod-conmon-d89568a19f7e50ba8f9311089f22ed60d43e1b4734aaf88a8ac3670e1e851b9c.scope: Deactivated successfully.
Oct 01 16:34:16 compute-0 podman[75853]: 2025-10-01 16:34:16.459612477 +0000 UTC m=+0.053063470 container create 0d021f2984b2cf7d74ab3df974cb5b37ebc449c20cb34fad09fbdbe245feec05 (image=quay.io/ceph/ceph:v18, name=flamboyant_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:34:16 compute-0 systemd[1]: Started libpod-conmon-0d021f2984b2cf7d74ab3df974cb5b37ebc449c20cb34fad09fbdbe245feec05.scope.
Oct 01 16:34:16 compute-0 podman[75853]: 2025-10-01 16:34:16.430312988 +0000 UTC m=+0.023764041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:16 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36fd00b67b91692fbc71f78add5bd62407692e350caa93eb415b7699279bd3e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36fd00b67b91692fbc71f78add5bd62407692e350caa93eb415b7699279bd3e8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36fd00b67b91692fbc71f78add5bd62407692e350caa93eb415b7699279bd3e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:16 compute-0 podman[75853]: 2025-10-01 16:34:16.55123174 +0000 UTC m=+0.144682773 container init 0d021f2984b2cf7d74ab3df974cb5b37ebc449c20cb34fad09fbdbe245feec05 (image=quay.io/ceph/ceph:v18, name=flamboyant_kepler, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 16:34:16 compute-0 podman[75853]: 2025-10-01 16:34:16.558278397 +0000 UTC m=+0.151729350 container start 0d021f2984b2cf7d74ab3df974cb5b37ebc449c20cb34fad09fbdbe245feec05 (image=quay.io/ceph/ceph:v18, name=flamboyant_kepler, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 01 16:34:16 compute-0 podman[75853]: 2025-10-01 16:34:16.562245488 +0000 UTC m=+0.155696521 container attach 0d021f2984b2cf7d74ab3df974cb5b37ebc449c20cb34fad09fbdbe245feec05 (image=quay.io/ceph/ceph:v18, name=flamboyant_kepler, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:34:16 compute-0 ceph-mon[74273]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:16 compute-0 ceph-mon[74273]: Set ssh ssh_identity_pub
Oct 01 16:34:17 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:17 compute-0 ceph-mgr[74571]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 16:34:17 compute-0 sshd-session[75896]: Accepted publickey for ceph-admin from 192.168.122.100 port 53748 ssh2: RSA SHA256:KPvZnRcsTOaBZYiLSl21+XqX/cMo4GccpaCtxoWDcjI
Oct 01 16:34:17 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct 01 16:34:17 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 01 16:34:17 compute-0 systemd-logind[788]: New session 21 of user ceph-admin.
Oct 01 16:34:17 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 01 16:34:17 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct 01 16:34:17 compute-0 systemd[75900]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 16:34:17 compute-0 sshd-session[75908]: Accepted publickey for ceph-admin from 192.168.122.100 port 53752 ssh2: RSA SHA256:KPvZnRcsTOaBZYiLSl21+XqX/cMo4GccpaCtxoWDcjI
Oct 01 16:34:17 compute-0 systemd[75900]: Queued start job for default target Main User Target.
Oct 01 16:34:17 compute-0 systemd-logind[788]: New session 23 of user ceph-admin.
Oct 01 16:34:17 compute-0 systemd[75900]: Created slice User Application Slice.
Oct 01 16:34:17 compute-0 systemd[75900]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 01 16:34:17 compute-0 systemd[75900]: Started Daily Cleanup of User's Temporary Directories.
Oct 01 16:34:17 compute-0 systemd[75900]: Reached target Paths.
Oct 01 16:34:17 compute-0 systemd[75900]: Reached target Timers.
Oct 01 16:34:17 compute-0 systemd[75900]: Starting D-Bus User Message Bus Socket...
Oct 01 16:34:17 compute-0 systemd[75900]: Starting Create User's Volatile Files and Directories...
Oct 01 16:34:17 compute-0 systemd[75900]: Finished Create User's Volatile Files and Directories.
Oct 01 16:34:17 compute-0 systemd[75900]: Listening on D-Bus User Message Bus Socket.
Oct 01 16:34:17 compute-0 systemd[75900]: Reached target Sockets.
Oct 01 16:34:17 compute-0 systemd[75900]: Reached target Basic System.
Oct 01 16:34:17 compute-0 systemd[75900]: Reached target Main User Target.
Oct 01 16:34:17 compute-0 systemd[75900]: Startup finished in 137ms.
Oct 01 16:34:17 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct 01 16:34:17 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Oct 01 16:34:17 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Oct 01 16:34:17 compute-0 sshd-session[75896]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 16:34:17 compute-0 sshd-session[75908]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 16:34:17 compute-0 sudo[75921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:17 compute-0 sudo[75921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:17 compute-0 sudo[75921]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:17 compute-0 sudo[75946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:34:17 compute-0 sudo[75946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:17 compute-0 sudo[75946]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:17 compute-0 ceph-mon[74273]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:17 compute-0 sshd-session[75971]: Accepted publickey for ceph-admin from 192.168.122.100 port 53764 ssh2: RSA SHA256:KPvZnRcsTOaBZYiLSl21+XqX/cMo4GccpaCtxoWDcjI
Oct 01 16:34:18 compute-0 systemd-logind[788]: New session 24 of user ceph-admin.
Oct 01 16:34:18 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Oct 01 16:34:18 compute-0 sshd-session[75971]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 16:34:18 compute-0 sudo[75975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:18 compute-0 sudo[75975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:18 compute-0 sudo[75975]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:18 compute-0 sudo[76000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 01 16:34:18 compute-0 sudo[76000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:18 compute-0 sudo[76000]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:18 compute-0 sshd-session[76025]: Accepted publickey for ceph-admin from 192.168.122.100 port 52056 ssh2: RSA SHA256:KPvZnRcsTOaBZYiLSl21+XqX/cMo4GccpaCtxoWDcjI
Oct 01 16:34:18 compute-0 systemd-logind[788]: New session 25 of user ceph-admin.
Oct 01 16:34:18 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Oct 01 16:34:18 compute-0 sshd-session[76025]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 16:34:18 compute-0 sudo[76029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:18 compute-0 sudo[76029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:18 compute-0 sudo[76029]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:18 compute-0 sudo[76054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Oct 01 16:34:18 compute-0 sudo[76054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:18 compute-0 sudo[76054]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:18 compute-0 ceph-mgr[74571]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct 01 16:34:18 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct 01 16:34:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053017 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:34:18 compute-0 sshd-session[76079]: Accepted publickey for ceph-admin from 192.168.122.100 port 52064 ssh2: RSA SHA256:KPvZnRcsTOaBZYiLSl21+XqX/cMo4GccpaCtxoWDcjI
Oct 01 16:34:18 compute-0 systemd-logind[788]: New session 26 of user ceph-admin.
Oct 01 16:34:18 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Oct 01 16:34:18 compute-0 sshd-session[76079]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 16:34:18 compute-0 ceph-mon[74273]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:18 compute-0 sudo[76083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:18 compute-0 sudo[76083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:18 compute-0 sudo[76083]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:19 compute-0 sudo[76108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:34:19 compute-0 sudo[76108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:19 compute-0 sudo[76108]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:19 compute-0 ceph-mgr[74571]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 16:34:19 compute-0 sshd-session[76133]: Accepted publickey for ceph-admin from 192.168.122.100 port 52080 ssh2: RSA SHA256:KPvZnRcsTOaBZYiLSl21+XqX/cMo4GccpaCtxoWDcjI
Oct 01 16:34:19 compute-0 systemd-logind[788]: New session 27 of user ceph-admin.
Oct 01 16:34:19 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Oct 01 16:34:19 compute-0 sshd-session[76133]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 16:34:19 compute-0 sudo[76137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:19 compute-0 sudo[76137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:19 compute-0 sudo[76137]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:19 compute-0 sudo[76162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:34:19 compute-0 sudo[76162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:19 compute-0 sudo[76162]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:19 compute-0 sshd-session[76187]: Accepted publickey for ceph-admin from 192.168.122.100 port 52094 ssh2: RSA SHA256:KPvZnRcsTOaBZYiLSl21+XqX/cMo4GccpaCtxoWDcjI
Oct 01 16:34:19 compute-0 systemd-logind[788]: New session 28 of user ceph-admin.
Oct 01 16:34:19 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Oct 01 16:34:19 compute-0 sshd-session[76187]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 16:34:19 compute-0 sudo[76191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:19 compute-0 sudo[76191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:19 compute-0 sudo[76191]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:19 compute-0 ceph-mon[74273]: Deploying cephadm binary to compute-0
Oct 01 16:34:19 compute-0 sudo[76216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Oct 01 16:34:19 compute-0 sudo[76216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:19 compute-0 sudo[76216]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:20 compute-0 sshd-session[76241]: Accepted publickey for ceph-admin from 192.168.122.100 port 52110 ssh2: RSA SHA256:KPvZnRcsTOaBZYiLSl21+XqX/cMo4GccpaCtxoWDcjI
Oct 01 16:34:20 compute-0 systemd-logind[788]: New session 29 of user ceph-admin.
Oct 01 16:34:20 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Oct 01 16:34:20 compute-0 sshd-session[76241]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 16:34:20 compute-0 sudo[76245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:20 compute-0 sudo[76245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:20 compute-0 sudo[76245]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:20 compute-0 sudo[76270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:34:20 compute-0 sudo[76270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:20 compute-0 sudo[76270]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:20 compute-0 sshd-session[76295]: Accepted publickey for ceph-admin from 192.168.122.100 port 52120 ssh2: RSA SHA256:KPvZnRcsTOaBZYiLSl21+XqX/cMo4GccpaCtxoWDcjI
Oct 01 16:34:20 compute-0 systemd-logind[788]: New session 30 of user ceph-admin.
Oct 01 16:34:20 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Oct 01 16:34:20 compute-0 sshd-session[76295]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 16:34:20 compute-0 sudo[76299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:20 compute-0 sudo[76299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:20 compute-0 sudo[76299]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:20 compute-0 sudo[76324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Oct 01 16:34:20 compute-0 sudo[76324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:20 compute-0 sudo[76324]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:20 compute-0 sshd-session[76349]: Accepted publickey for ceph-admin from 192.168.122.100 port 52136 ssh2: RSA SHA256:KPvZnRcsTOaBZYiLSl21+XqX/cMo4GccpaCtxoWDcjI
Oct 01 16:34:20 compute-0 systemd-logind[788]: New session 31 of user ceph-admin.
Oct 01 16:34:20 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Oct 01 16:34:20 compute-0 sshd-session[76349]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 16:34:21 compute-0 ceph-mgr[74571]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 16:34:21 compute-0 sshd-session[76376]: Accepted publickey for ceph-admin from 192.168.122.100 port 52138 ssh2: RSA SHA256:KPvZnRcsTOaBZYiLSl21+XqX/cMo4GccpaCtxoWDcjI
Oct 01 16:34:21 compute-0 systemd-logind[788]: New session 32 of user ceph-admin.
Oct 01 16:34:21 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Oct 01 16:34:21 compute-0 sshd-session[76376]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 16:34:21 compute-0 sudo[76380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:21 compute-0 sudo[76380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:21 compute-0 sudo[76380]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:21 compute-0 sudo[76405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Oct 01 16:34:21 compute-0 sudo[76405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:21 compute-0 sudo[76405]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:21 compute-0 sshd-session[76430]: Accepted publickey for ceph-admin from 192.168.122.100 port 52152 ssh2: RSA SHA256:KPvZnRcsTOaBZYiLSl21+XqX/cMo4GccpaCtxoWDcjI
Oct 01 16:34:21 compute-0 systemd-logind[788]: New session 33 of user ceph-admin.
Oct 01 16:34:22 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Oct 01 16:34:22 compute-0 sshd-session[76430]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 16:34:22 compute-0 sudo[76434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:22 compute-0 sudo[76434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:22 compute-0 sudo[76434]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:22 compute-0 sudo[76459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 01 16:34:22 compute-0 sudo[76459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:22 compute-0 sudo[76459]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 01 16:34:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:22 compute-0 ceph-mgr[74571]: [cephadm INFO root] Added host compute-0
Oct 01 16:34:22 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 01 16:34:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 01 16:34:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 16:34:22 compute-0 flamboyant_kepler[75870]: Added host 'compute-0' with addr '192.168.122.100'
Oct 01 16:34:22 compute-0 systemd[1]: libpod-0d021f2984b2cf7d74ab3df974cb5b37ebc449c20cb34fad09fbdbe245feec05.scope: Deactivated successfully.
Oct 01 16:34:22 compute-0 podman[75853]: 2025-10-01 16:34:22.491152535 +0000 UTC m=+6.084603508 container died 0d021f2984b2cf7d74ab3df974cb5b37ebc449c20cb34fad09fbdbe245feec05 (image=quay.io/ceph/ceph:v18, name=flamboyant_kepler, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 16:34:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-36fd00b67b91692fbc71f78add5bd62407692e350caa93eb415b7699279bd3e8-merged.mount: Deactivated successfully.
Oct 01 16:34:22 compute-0 sudo[76505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:22 compute-0 sudo[76505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:22 compute-0 sudo[76505]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:22 compute-0 podman[75853]: 2025-10-01 16:34:22.545503347 +0000 UTC m=+6.138954300 container remove 0d021f2984b2cf7d74ab3df974cb5b37ebc449c20cb34fad09fbdbe245feec05 (image=quay.io/ceph/ceph:v18, name=flamboyant_kepler, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 16:34:22 compute-0 systemd[1]: libpod-conmon-0d021f2984b2cf7d74ab3df974cb5b37ebc449c20cb34fad09fbdbe245feec05.scope: Deactivated successfully.
Oct 01 16:34:22 compute-0 sudo[76541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:34:22 compute-0 sudo[76541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:22 compute-0 sudo[76541]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:22 compute-0 podman[76550]: 2025-10-01 16:34:22.609050801 +0000 UTC m=+0.041379886 container create 351ad27d90f146b71e6efa5034bfec775159ae948885072cd7c4d64c336ee299 (image=quay.io/ceph/ceph:v18, name=exciting_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 16:34:22 compute-0 systemd[1]: Started libpod-conmon-351ad27d90f146b71e6efa5034bfec775159ae948885072cd7c4d64c336ee299.scope.
Oct 01 16:34:22 compute-0 sudo[76583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:22 compute-0 sudo[76583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:22 compute-0 sudo[76583]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:22 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ca0b8ba44a563e4c5fa4b3b280adfaf90eb532caca603f343c6e355f07eb82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ca0b8ba44a563e4c5fa4b3b280adfaf90eb532caca603f343c6e355f07eb82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ca0b8ba44a563e4c5fa4b3b280adfaf90eb532caca603f343c6e355f07eb82/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:22 compute-0 podman[76550]: 2025-10-01 16:34:22.681214973 +0000 UTC m=+0.113544078 container init 351ad27d90f146b71e6efa5034bfec775159ae948885072cd7c4d64c336ee299 (image=quay.io/ceph/ceph:v18, name=exciting_wozniak, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:34:22 compute-0 podman[76550]: 2025-10-01 16:34:22.590751339 +0000 UTC m=+0.023080474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:22 compute-0 podman[76550]: 2025-10-01 16:34:22.687367688 +0000 UTC m=+0.119696773 container start 351ad27d90f146b71e6efa5034bfec775159ae948885072cd7c4d64c336ee299 (image=quay.io/ceph/ceph:v18, name=exciting_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:34:22 compute-0 podman[76550]: 2025-10-01 16:34:22.691126313 +0000 UTC m=+0.123455428 container attach 351ad27d90f146b71e6efa5034bfec775159ae948885072cd7c4d64c336ee299 (image=quay.io/ceph/ceph:v18, name=exciting_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 16:34:22 compute-0 sudo[76613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Oct 01 16:34:22 compute-0 sudo[76613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:22 compute-0 podman[76666]: 2025-10-01 16:34:22.927614493 +0000 UTC m=+0.043378076 container create 93181092bcaa204229ef0380a30bbb2018d5cbf38e3c7fe302aa5f444c5c179c (image=quay.io/ceph/ceph:v18, name=elegant_lumiere, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:34:22 compute-0 systemd[1]: Started libpod-conmon-93181092bcaa204229ef0380a30bbb2018d5cbf38e3c7fe302aa5f444c5c179c.scope.
Oct 01 16:34:22 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:22 compute-0 podman[76666]: 2025-10-01 16:34:22.975916662 +0000 UTC m=+0.091680255 container init 93181092bcaa204229ef0380a30bbb2018d5cbf38e3c7fe302aa5f444c5c179c (image=quay.io/ceph/ceph:v18, name=elegant_lumiere, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:34:22 compute-0 podman[76666]: 2025-10-01 16:34:22.981191655 +0000 UTC m=+0.096955238 container start 93181092bcaa204229ef0380a30bbb2018d5cbf38e3c7fe302aa5f444c5c179c (image=quay.io/ceph/ceph:v18, name=elegant_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 01 16:34:22 compute-0 podman[76666]: 2025-10-01 16:34:22.984079308 +0000 UTC m=+0.099842891 container attach 93181092bcaa204229ef0380a30bbb2018d5cbf38e3c7fe302aa5f444c5c179c (image=quay.io/ceph/ceph:v18, name=elegant_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 16:34:22 compute-0 podman[76666]: 2025-10-01 16:34:22.902593961 +0000 UTC m=+0.018357574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:23 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:23 compute-0 ceph-mgr[74571]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct 01 16:34:23 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct 01 16:34:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 01 16:34:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:23 compute-0 exciting_wozniak[76608]: Scheduled mon update...
Oct 01 16:34:23 compute-0 systemd[1]: libpod-351ad27d90f146b71e6efa5034bfec775159ae948885072cd7c4d64c336ee299.scope: Deactivated successfully.
Oct 01 16:34:23 compute-0 podman[76550]: 2025-10-01 16:34:23.244475551 +0000 UTC m=+0.676804646 container died 351ad27d90f146b71e6efa5034bfec775159ae948885072cd7c4d64c336ee299 (image=quay.io/ceph/ceph:v18, name=exciting_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 16:34:23 compute-0 elegant_lumiere[76684]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct 01 16:34:23 compute-0 ceph-mgr[74571]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 16:34:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0ca0b8ba44a563e4c5fa4b3b280adfaf90eb532caca603f343c6e355f07eb82-merged.mount: Deactivated successfully.
Oct 01 16:34:23 compute-0 systemd[1]: libpod-93181092bcaa204229ef0380a30bbb2018d5cbf38e3c7fe302aa5f444c5c179c.scope: Deactivated successfully.
Oct 01 16:34:23 compute-0 conmon[76684]: conmon 93181092bcaa204229ef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-93181092bcaa204229ef0380a30bbb2018d5cbf38e3c7fe302aa5f444c5c179c.scope/container/memory.events
Oct 01 16:34:23 compute-0 podman[76666]: 2025-10-01 16:34:23.279258399 +0000 UTC m=+0.395021982 container died 93181092bcaa204229ef0380a30bbb2018d5cbf38e3c7fe302aa5f444c5c179c (image=quay.io/ceph/ceph:v18, name=elegant_lumiere, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:34:23 compute-0 podman[76550]: 2025-10-01 16:34:23.303504861 +0000 UTC m=+0.735833956 container remove 351ad27d90f146b71e6efa5034bfec775159ae948885072cd7c4d64c336ee299 (image=quay.io/ceph/ceph:v18, name=exciting_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 16:34:23 compute-0 systemd[1]: libpod-conmon-351ad27d90f146b71e6efa5034bfec775159ae948885072cd7c4d64c336ee299.scope: Deactivated successfully.
Oct 01 16:34:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f180e042eb11334f974f80bfdeaae2480952a359ba1c71f100006653f519972e-merged.mount: Deactivated successfully.
Oct 01 16:34:23 compute-0 podman[76666]: 2025-10-01 16:34:23.350024176 +0000 UTC m=+0.465787769 container remove 93181092bcaa204229ef0380a30bbb2018d5cbf38e3c7fe302aa5f444c5c179c (image=quay.io/ceph/ceph:v18, name=elegant_lumiere, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 16:34:23 compute-0 systemd[1]: libpod-conmon-93181092bcaa204229ef0380a30bbb2018d5cbf38e3c7fe302aa5f444c5c179c.scope: Deactivated successfully.
Oct 01 16:34:23 compute-0 podman[76733]: 2025-10-01 16:34:23.369580839 +0000 UTC m=+0.045990752 container create 5f58175d291c88c6f77baeeb693cb70556bad387cd9db528d116aad80f685072 (image=quay.io/ceph/ceph:v18, name=goofy_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:34:23 compute-0 sudo[76613]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Oct 01 16:34:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:23 compute-0 systemd[1]: Started libpod-conmon-5f58175d291c88c6f77baeeb693cb70556bad387cd9db528d116aad80f685072.scope.
Oct 01 16:34:23 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebed305ff84a791ce6e38ab145fef412099e503fae0d521a7333535902324a80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebed305ff84a791ce6e38ab145fef412099e503fae0d521a7333535902324a80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebed305ff84a791ce6e38ab145fef412099e503fae0d521a7333535902324a80/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:23 compute-0 podman[76733]: 2025-10-01 16:34:23.433438891 +0000 UTC m=+0.109848824 container init 5f58175d291c88c6f77baeeb693cb70556bad387cd9db528d116aad80f685072 (image=quay.io/ceph/ceph:v18, name=goofy_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 16:34:23 compute-0 podman[76733]: 2025-10-01 16:34:23.439147055 +0000 UTC m=+0.115556978 container start 5f58175d291c88c6f77baeeb693cb70556bad387cd9db528d116aad80f685072 (image=quay.io/ceph/ceph:v18, name=goofy_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:34:23 compute-0 podman[76733]: 2025-10-01 16:34:23.442596673 +0000 UTC m=+0.119006586 container attach 5f58175d291c88c6f77baeeb693cb70556bad387cd9db528d116aad80f685072 (image=quay.io/ceph/ceph:v18, name=goofy_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:34:23 compute-0 podman[76733]: 2025-10-01 16:34:23.346493297 +0000 UTC m=+0.022903240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:23 compute-0 sudo[76754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:23 compute-0 sudo[76754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:23 compute-0 ceph-mon[74273]: Added host compute-0
Oct 01 16:34:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 16:34:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:23 compute-0 sudo[76754]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:23 compute-0 sudo[76781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:34:23 compute-0 sudo[76781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:23 compute-0 sudo[76781]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:23 compute-0 sudo[76806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:23 compute-0 sudo[76806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:23 compute-0 sudo[76806]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:23 compute-0 sudo[76831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 01 16:34:23 compute-0 sudo[76831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:34:23 compute-0 sudo[76831]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:34:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:23 compute-0 sudo[76894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:23 compute-0 sudo[76894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:23 compute-0 sudo[76894]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:23 compute-0 sudo[76919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:34:23 compute-0 sudo[76919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:23 compute-0 sudo[76919]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:23 compute-0 sudo[76944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:23 compute-0 sudo[76944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:23 compute-0 sudo[76944]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:23 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:23 compute-0 ceph-mgr[74571]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct 01 16:34:23 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct 01 16:34:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 01 16:34:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:23 compute-0 goofy_montalcini[76751]: Scheduled mgr update...
Oct 01 16:34:23 compute-0 systemd[1]: libpod-5f58175d291c88c6f77baeeb693cb70556bad387cd9db528d116aad80f685072.scope: Deactivated successfully.
Oct 01 16:34:23 compute-0 podman[76733]: 2025-10-01 16:34:23.995650713 +0000 UTC m=+0.672060626 container died 5f58175d291c88c6f77baeeb693cb70556bad387cd9db528d116aad80f685072 (image=quay.io/ceph/ceph:v18, name=goofy_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 01 16:34:24 compute-0 sudo[76969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 16:34:24 compute-0 sudo[76969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebed305ff84a791ce6e38ab145fef412099e503fae0d521a7333535902324a80-merged.mount: Deactivated successfully.
Oct 01 16:34:24 compute-0 podman[76733]: 2025-10-01 16:34:24.035586461 +0000 UTC m=+0.711996364 container remove 5f58175d291c88c6f77baeeb693cb70556bad387cd9db528d116aad80f685072 (image=quay.io/ceph/ceph:v18, name=goofy_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:24 compute-0 systemd[1]: libpod-conmon-5f58175d291c88c6f77baeeb693cb70556bad387cd9db528d116aad80f685072.scope: Deactivated successfully.
Oct 01 16:34:24 compute-0 podman[77008]: 2025-10-01 16:34:24.095947495 +0000 UTC m=+0.042241018 container create 95dafbdfee985f9fcf45dfe1044c8af0bfae153b6ee902b4c3d1eb05820b6e49 (image=quay.io/ceph/ceph:v18, name=trusting_bell, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 16:34:24 compute-0 systemd[1]: Started libpod-conmon-95dafbdfee985f9fcf45dfe1044c8af0bfae153b6ee902b4c3d1eb05820b6e49.scope.
Oct 01 16:34:24 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294dccab3c38979498b63350176601d65d115eb5407984dd42fdad1c6aec99ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294dccab3c38979498b63350176601d65d115eb5407984dd42fdad1c6aec99ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294dccab3c38979498b63350176601d65d115eb5407984dd42fdad1c6aec99ab/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:24 compute-0 podman[77008]: 2025-10-01 16:34:24.168627779 +0000 UTC m=+0.114921322 container init 95dafbdfee985f9fcf45dfe1044c8af0bfae153b6ee902b4c3d1eb05820b6e49 (image=quay.io/ceph/ceph:v18, name=trusting_bell, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:34:24 compute-0 podman[77008]: 2025-10-01 16:34:24.077775286 +0000 UTC m=+0.024068829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:24 compute-0 podman[77008]: 2025-10-01 16:34:24.17500075 +0000 UTC m=+0.121294273 container start 95dafbdfee985f9fcf45dfe1044c8af0bfae153b6ee902b4c3d1eb05820b6e49 (image=quay.io/ceph/ceph:v18, name=trusting_bell, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 16:34:24 compute-0 podman[77008]: 2025-10-01 16:34:24.178075398 +0000 UTC m=+0.124368941 container attach 95dafbdfee985f9fcf45dfe1044c8af0bfae153b6ee902b4c3d1eb05820b6e49 (image=quay.io/ceph/ceph:v18, name=trusting_bell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 01 16:34:24 compute-0 podman[77102]: 2025-10-01 16:34:24.424984591 +0000 UTC m=+0.051289526 container exec bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 16:34:24 compute-0 podman[77102]: 2025-10-01 16:34:24.716333776 +0000 UTC m=+0.342638721 container exec_died bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:34:24 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:24 compute-0 ceph-mgr[74571]: [cephadm INFO root] Saving service crash spec with placement *
Oct 01 16:34:24 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct 01 16:34:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 01 16:34:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:24 compute-0 trusting_bell[77026]: Scheduled crash update...
Oct 01 16:34:24 compute-0 systemd[1]: libpod-95dafbdfee985f9fcf45dfe1044c8af0bfae153b6ee902b4c3d1eb05820b6e49.scope: Deactivated successfully.
Oct 01 16:34:24 compute-0 podman[77008]: 2025-10-01 16:34:24.785732798 +0000 UTC m=+0.732026321 container died 95dafbdfee985f9fcf45dfe1044c8af0bfae153b6ee902b4c3d1eb05820b6e49 (image=quay.io/ceph/ceph:v18, name=trusting_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:34:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-294dccab3c38979498b63350176601d65d115eb5407984dd42fdad1c6aec99ab-merged.mount: Deactivated successfully.
Oct 01 16:34:24 compute-0 ceph-mon[74273]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:24 compute-0 ceph-mon[74273]: Saving service mon spec with placement count:5
Oct 01 16:34:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:24 compute-0 podman[77008]: 2025-10-01 16:34:24.839449724 +0000 UTC m=+0.785743257 container remove 95dafbdfee985f9fcf45dfe1044c8af0bfae153b6ee902b4c3d1eb05820b6e49 (image=quay.io/ceph/ceph:v18, name=trusting_bell, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 16:34:24 compute-0 systemd[1]: libpod-conmon-95dafbdfee985f9fcf45dfe1044c8af0bfae153b6ee902b4c3d1eb05820b6e49.scope: Deactivated successfully.
Oct 01 16:34:24 compute-0 sudo[76969]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:34:24 compute-0 podman[77187]: 2025-10-01 16:34:24.912039606 +0000 UTC m=+0.056334103 container create a86db0268a6a210af4411c43ea09df1cc8188a5a76e17ee01f08a5eb2accd6d2 (image=quay.io/ceph/ceph:v18, name=busy_chaum, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:34:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:24 compute-0 systemd[1]: Started libpod-conmon-a86db0268a6a210af4411c43ea09df1cc8188a5a76e17ee01f08a5eb2accd6d2.scope.
Oct 01 16:34:24 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:24 compute-0 sudo[77203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:24 compute-0 sudo[77203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:24 compute-0 sudo[77203]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5b6535a5e004b75ce78aabfc49fcd1f2525ef5bc32e7bc8b206234f57530c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5b6535a5e004b75ce78aabfc49fcd1f2525ef5bc32e7bc8b206234f57530c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5b6535a5e004b75ce78aabfc49fcd1f2525ef5bc32e7bc8b206234f57530c5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:24 compute-0 podman[77187]: 2025-10-01 16:34:24.877631127 +0000 UTC m=+0.021925664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:24 compute-0 podman[77187]: 2025-10-01 16:34:24.978200256 +0000 UTC m=+0.122494753 container init a86db0268a6a210af4411c43ea09df1cc8188a5a76e17ee01f08a5eb2accd6d2 (image=quay.io/ceph/ceph:v18, name=busy_chaum, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:34:24 compute-0 podman[77187]: 2025-10-01 16:34:24.983927981 +0000 UTC m=+0.128222478 container start a86db0268a6a210af4411c43ea09df1cc8188a5a76e17ee01f08a5eb2accd6d2 (image=quay.io/ceph/ceph:v18, name=busy_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:34:24 compute-0 podman[77187]: 2025-10-01 16:34:24.98708217 +0000 UTC m=+0.131376667 container attach a86db0268a6a210af4411c43ea09df1cc8188a5a76e17ee01f08a5eb2accd6d2 (image=quay.io/ceph/ceph:v18, name=busy_chaum, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:34:25 compute-0 sudo[77233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:34:25 compute-0 sudo[77233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:25 compute-0 sudo[77233]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:25 compute-0 sudo[77260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:25 compute-0 sudo[77260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:25 compute-0 sudo[77260]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:25 compute-0 sudo[77285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:34:25 compute-0 sudo[77285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:25 compute-0 ceph-mgr[74571]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 16:34:25 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77324 (sysctl)
Oct 01 16:34:25 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 01 16:34:25 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 01 16:34:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Oct 01 16:34:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1402517819' entity='client.admin' 
Oct 01 16:34:25 compute-0 systemd[1]: libpod-a86db0268a6a210af4411c43ea09df1cc8188a5a76e17ee01f08a5eb2accd6d2.scope: Deactivated successfully.
Oct 01 16:34:25 compute-0 podman[77187]: 2025-10-01 16:34:25.50444368 +0000 UTC m=+0.648738207 container died a86db0268a6a210af4411c43ea09df1cc8188a5a76e17ee01f08a5eb2accd6d2 (image=quay.io/ceph/ceph:v18, name=busy_chaum, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 16:34:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b5b6535a5e004b75ce78aabfc49fcd1f2525ef5bc32e7bc8b206234f57530c5-merged.mount: Deactivated successfully.
Oct 01 16:34:25 compute-0 sudo[77285]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:25 compute-0 podman[77187]: 2025-10-01 16:34:25.550502883 +0000 UTC m=+0.694797380 container remove a86db0268a6a210af4411c43ea09df1cc8188a5a76e17ee01f08a5eb2accd6d2 (image=quay.io/ceph/ceph:v18, name=busy_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 01 16:34:25 compute-0 systemd[1]: libpod-conmon-a86db0268a6a210af4411c43ea09df1cc8188a5a76e17ee01f08a5eb2accd6d2.scope: Deactivated successfully.
Oct 01 16:34:25 compute-0 podman[77379]: 2025-10-01 16:34:25.611757999 +0000 UTC m=+0.040333390 container create 45d9e04bb99996db8986d4d2458b06d5d2aaa93a95df2efed6a59b37283ab875 (image=quay.io/ceph/ceph:v18, name=elastic_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 16:34:25 compute-0 sudo[77392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:25 compute-0 sudo[77392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:25 compute-0 sudo[77392]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:25 compute-0 systemd[1]: Started libpod-conmon-45d9e04bb99996db8986d4d2458b06d5d2aaa93a95df2efed6a59b37283ab875.scope.
Oct 01 16:34:25 compute-0 podman[77379]: 2025-10-01 16:34:25.592913804 +0000 UTC m=+0.021489244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:25 compute-0 sudo[77420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:34:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33aad1d631244a303f8689d8e66f1c6bc46ae3dea9f6ccfb8e08e29af5d620ad/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33aad1d631244a303f8689d8e66f1c6bc46ae3dea9f6ccfb8e08e29af5d620ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:25 compute-0 sudo[77420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33aad1d631244a303f8689d8e66f1c6bc46ae3dea9f6ccfb8e08e29af5d620ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:25 compute-0 sudo[77420]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:25 compute-0 podman[77379]: 2025-10-01 16:34:25.72785957 +0000 UTC m=+0.156434980 container init 45d9e04bb99996db8986d4d2458b06d5d2aaa93a95df2efed6a59b37283ab875 (image=quay.io/ceph/ceph:v18, name=elastic_hertz, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:25 compute-0 podman[77379]: 2025-10-01 16:34:25.733975765 +0000 UTC m=+0.162551155 container start 45d9e04bb99996db8986d4d2458b06d5d2aaa93a95df2efed6a59b37283ab875 (image=quay.io/ceph/ceph:v18, name=elastic_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 16:34:25 compute-0 podman[77379]: 2025-10-01 16:34:25.738432797 +0000 UTC m=+0.167008187 container attach 45d9e04bb99996db8986d4d2458b06d5d2aaa93a95df2efed6a59b37283ab875 (image=quay.io/ceph/ceph:v18, name=elastic_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 16:34:25 compute-0 sudo[77448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:25 compute-0 sudo[77448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:25 compute-0 sudo[77448]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:25 compute-0 sudo[77475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 01 16:34:25 compute-0 sudo[77475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:25 compute-0 ceph-mon[74273]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:25 compute-0 ceph-mon[74273]: Saving service mgr spec with placement count:2
Oct 01 16:34:25 compute-0 ceph-mon[74273]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:25 compute-0 ceph-mon[74273]: Saving service crash spec with placement *
Oct 01 16:34:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1402517819' entity='client.admin' 
Oct 01 16:34:26 compute-0 sudo[77475]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:34:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:26 compute-0 sudo[77520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:26 compute-0 sudo[77520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:26 compute-0 sudo[77520]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:26 compute-0 sudo[77562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:34:26 compute-0 sudo[77562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:26 compute-0 sudo[77562]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:26 compute-0 sudo[77587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:26 compute-0 sudo[77587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:26 compute-0 sudo[77587]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:26 compute-0 sudo[77612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- inventory --format=json-pretty --filter-for-batch
Oct 01 16:34:26 compute-0 sudo[77612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:26 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Oct 01 16:34:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:26 compute-0 systemd[1]: libpod-45d9e04bb99996db8986d4d2458b06d5d2aaa93a95df2efed6a59b37283ab875.scope: Deactivated successfully.
Oct 01 16:34:26 compute-0 podman[77379]: 2025-10-01 16:34:26.29750087 +0000 UTC m=+0.726076260 container died 45d9e04bb99996db8986d4d2458b06d5d2aaa93a95df2efed6a59b37283ab875 (image=quay.io/ceph/ceph:v18, name=elastic_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-33aad1d631244a303f8689d8e66f1c6bc46ae3dea9f6ccfb8e08e29af5d620ad-merged.mount: Deactivated successfully.
Oct 01 16:34:26 compute-0 podman[77379]: 2025-10-01 16:34:26.345284796 +0000 UTC m=+0.773860196 container remove 45d9e04bb99996db8986d4d2458b06d5d2aaa93a95df2efed6a59b37283ab875 (image=quay.io/ceph/ceph:v18, name=elastic_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 16:34:26 compute-0 systemd[1]: libpod-conmon-45d9e04bb99996db8986d4d2458b06d5d2aaa93a95df2efed6a59b37283ab875.scope: Deactivated successfully.
Oct 01 16:34:26 compute-0 podman[77654]: 2025-10-01 16:34:26.42385295 +0000 UTC m=+0.060325294 container create a802e5d7db01d8331f5a500aac25cc49586b86a9b3c7cc7ac0bbbf04c6226d22 (image=quay.io/ceph/ceph:v18, name=hopeful_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:26 compute-0 podman[77654]: 2025-10-01 16:34:26.38741024 +0000 UTC m=+0.023882604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:26 compute-0 systemd[1]: Started libpod-conmon-a802e5d7db01d8331f5a500aac25cc49586b86a9b3c7cc7ac0bbbf04c6226d22.scope.
Oct 01 16:34:26 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48948948fd70c5dc0ac5ef861505ed79891476728aed6edb0050d4796fa5aa26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48948948fd70c5dc0ac5ef861505ed79891476728aed6edb0050d4796fa5aa26/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48948948fd70c5dc0ac5ef861505ed79891476728aed6edb0050d4796fa5aa26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:26 compute-0 podman[77654]: 2025-10-01 16:34:26.546886426 +0000 UTC m=+0.183358810 container init a802e5d7db01d8331f5a500aac25cc49586b86a9b3c7cc7ac0bbbf04c6226d22 (image=quay.io/ceph/ceph:v18, name=hopeful_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:26 compute-0 podman[77654]: 2025-10-01 16:34:26.552408435 +0000 UTC m=+0.188880779 container start a802e5d7db01d8331f5a500aac25cc49586b86a9b3c7cc7ac0bbbf04c6226d22 (image=quay.io/ceph/ceph:v18, name=hopeful_feistel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 16:34:26 compute-0 podman[77654]: 2025-10-01 16:34:26.556241902 +0000 UTC m=+0.192714286 container attach a802e5d7db01d8331f5a500aac25cc49586b86a9b3c7cc7ac0bbbf04c6226d22 (image=quay.io/ceph/ceph:v18, name=hopeful_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:34:26 compute-0 podman[77715]: 2025-10-01 16:34:26.711691756 +0000 UTC m=+0.082408001 container create c5ec6d4f733d7c29e3d080c668f419b429b2927b4fbbae31b6f057ab4b221380 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 01 16:34:26 compute-0 podman[77715]: 2025-10-01 16:34:26.649842535 +0000 UTC m=+0.020558810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:34:26 compute-0 systemd[1]: Started libpod-conmon-c5ec6d4f733d7c29e3d080c668f419b429b2927b4fbbae31b6f057ab4b221380.scope.
Oct 01 16:34:26 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:26 compute-0 podman[77715]: 2025-10-01 16:34:26.790700621 +0000 UTC m=+0.161416866 container init c5ec6d4f733d7c29e3d080c668f419b429b2927b4fbbae31b6f057ab4b221380 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_neumann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:34:26 compute-0 podman[77715]: 2025-10-01 16:34:26.796588949 +0000 UTC m=+0.167305194 container start c5ec6d4f733d7c29e3d080c668f419b429b2927b4fbbae31b6f057ab4b221380 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_neumann, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 16:34:26 compute-0 friendly_neumann[77731]: 167 167
Oct 01 16:34:26 compute-0 systemd[1]: libpod-c5ec6d4f733d7c29e3d080c668f419b429b2927b4fbbae31b6f057ab4b221380.scope: Deactivated successfully.
Oct 01 16:34:26 compute-0 podman[77715]: 2025-10-01 16:34:26.84653782 +0000 UTC m=+0.217254065 container attach c5ec6d4f733d7c29e3d080c668f419b429b2927b4fbbae31b6f057ab4b221380 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_neumann, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:34:26 compute-0 podman[77715]: 2025-10-01 16:34:26.84694195 +0000 UTC m=+0.217658195 container died c5ec6d4f733d7c29e3d080c668f419b429b2927b4fbbae31b6f057ab4b221380 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:34:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d11696b0f86f889b2cc37f4b7090be1936be6d00430203243ed55a9d692d29c6-merged.mount: Deactivated successfully.
Oct 01 16:34:27 compute-0 podman[77715]: 2025-10-01 16:34:27.095335941 +0000 UTC m=+0.466052196 container remove c5ec6d4f733d7c29e3d080c668f419b429b2927b4fbbae31b6f057ab4b221380 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 16:34:27 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:27 compute-0 ceph-mon[74273]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:27 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:27 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:27 compute-0 systemd[1]: libpod-conmon-c5ec6d4f733d7c29e3d080c668f419b429b2927b4fbbae31b6f057ab4b221380.scope: Deactivated successfully.
Oct 01 16:34:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 01 16:34:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:27 compute-0 ceph-mgr[74571]: [cephadm INFO root] Added label _admin to host compute-0
Oct 01 16:34:27 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct 01 16:34:27 compute-0 hopeful_feistel[77695]: Added label _admin to host compute-0
Oct 01 16:34:27 compute-0 systemd[1]: libpod-a802e5d7db01d8331f5a500aac25cc49586b86a9b3c7cc7ac0bbbf04c6226d22.scope: Deactivated successfully.
Oct 01 16:34:27 compute-0 podman[77654]: 2025-10-01 16:34:27.170215811 +0000 UTC m=+0.806688155 container died a802e5d7db01d8331f5a500aac25cc49586b86a9b3c7cc7ac0bbbf04c6226d22 (image=quay.io/ceph/ceph:v18, name=hopeful_feistel, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 16:34:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-48948948fd70c5dc0ac5ef861505ed79891476728aed6edb0050d4796fa5aa26-merged.mount: Deactivated successfully.
Oct 01 16:34:27 compute-0 podman[77654]: 2025-10-01 16:34:27.250405534 +0000 UTC m=+0.886877878 container remove a802e5d7db01d8331f5a500aac25cc49586b86a9b3c7cc7ac0bbbf04c6226d22 (image=quay.io/ceph/ceph:v18, name=hopeful_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:27 compute-0 systemd[1]: libpod-conmon-a802e5d7db01d8331f5a500aac25cc49586b86a9b3c7cc7ac0bbbf04c6226d22.scope: Deactivated successfully.
Oct 01 16:34:27 compute-0 ceph-mgr[74571]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 16:34:27 compute-0 podman[77784]: 2025-10-01 16:34:27.320478003 +0000 UTC m=+0.052464195 container create dd99aebd4d6bc8dabccc7c0ee94b70b4b06db16d6f3a2790e446337cb14eeee2 (image=quay.io/ceph/ceph:v18, name=nostalgic_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:34:27 compute-0 systemd[1]: Started libpod-conmon-dd99aebd4d6bc8dabccc7c0ee94b70b4b06db16d6f3a2790e446337cb14eeee2.scope.
Oct 01 16:34:27 compute-0 podman[77784]: 2025-10-01 16:34:27.290193409 +0000 UTC m=+0.022179591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:27 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0b77e41c64dd3ba0e2082a6d85408222fcc7a51c3baf82bfb0e4a777e81cb4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0b77e41c64dd3ba0e2082a6d85408222fcc7a51c3baf82bfb0e4a777e81cb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0b77e41c64dd3ba0e2082a6d85408222fcc7a51c3baf82bfb0e4a777e81cb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:27 compute-0 podman[77784]: 2025-10-01 16:34:27.417725848 +0000 UTC m=+0.149712060 container init dd99aebd4d6bc8dabccc7c0ee94b70b4b06db16d6f3a2790e446337cb14eeee2 (image=quay.io/ceph/ceph:v18, name=nostalgic_franklin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:34:27 compute-0 podman[77784]: 2025-10-01 16:34:27.426156951 +0000 UTC m=+0.158143153 container start dd99aebd4d6bc8dabccc7c0ee94b70b4b06db16d6f3a2790e446337cb14eeee2 (image=quay.io/ceph/ceph:v18, name=nostalgic_franklin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:34:27 compute-0 podman[77784]: 2025-10-01 16:34:27.43206041 +0000 UTC m=+0.164046622 container attach dd99aebd4d6bc8dabccc7c0ee94b70b4b06db16d6f3a2790e446337cb14eeee2 (image=quay.io/ceph/ceph:v18, name=nostalgic_franklin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 16:34:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Oct 01 16:34:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4176433309' entity='client.admin' 
Oct 01 16:34:28 compute-0 systemd[1]: libpod-dd99aebd4d6bc8dabccc7c0ee94b70b4b06db16d6f3a2790e446337cb14eeee2.scope: Deactivated successfully.
Oct 01 16:34:28 compute-0 podman[77827]: 2025-10-01 16:34:28.05123335 +0000 UTC m=+0.024611112 container died dd99aebd4d6bc8dabccc7c0ee94b70b4b06db16d6f3a2790e446337cb14eeee2 (image=quay.io/ceph/ceph:v18, name=nostalgic_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:34:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd0b77e41c64dd3ba0e2082a6d85408222fcc7a51c3baf82bfb0e4a777e81cb4-merged.mount: Deactivated successfully.
Oct 01 16:34:28 compute-0 podman[77827]: 2025-10-01 16:34:28.136083672 +0000 UTC m=+0.109461374 container remove dd99aebd4d6bc8dabccc7c0ee94b70b4b06db16d6f3a2790e446337cb14eeee2 (image=quay.io/ceph/ceph:v18, name=nostalgic_franklin, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 16:34:28 compute-0 systemd[1]: libpod-conmon-dd99aebd4d6bc8dabccc7c0ee94b70b4b06db16d6f3a2790e446337cb14eeee2.scope: Deactivated successfully.
Oct 01 16:34:28 compute-0 ceph-mon[74273]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:28 compute-0 ceph-mon[74273]: Added label _admin to host compute-0
Oct 01 16:34:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4176433309' entity='client.admin' 
Oct 01 16:34:28 compute-0 podman[77842]: 2025-10-01 16:34:28.222890493 +0000 UTC m=+0.061716869 container create 0b54d5ff093d0b7ad0dbdba4c7ae57f3631e6907e898df06a0cf814d706b38d0 (image=quay.io/ceph/ceph:v18, name=determined_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 16:34:28 compute-0 systemd[1]: Started libpod-conmon-0b54d5ff093d0b7ad0dbdba4c7ae57f3631e6907e898df06a0cf814d706b38d0.scope.
Oct 01 16:34:28 compute-0 podman[77842]: 2025-10-01 16:34:28.180112343 +0000 UTC m=+0.018938750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:28 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25abba606019b67e06d859f76290abfbe9c357a9e31bc20e89b0653451eeb2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25abba606019b67e06d859f76290abfbe9c357a9e31bc20e89b0653451eeb2e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25abba606019b67e06d859f76290abfbe9c357a9e31bc20e89b0653451eeb2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:28 compute-0 podman[77842]: 2025-10-01 16:34:28.33646648 +0000 UTC m=+0.175292876 container init 0b54d5ff093d0b7ad0dbdba4c7ae57f3631e6907e898df06a0cf814d706b38d0 (image=quay.io/ceph/ceph:v18, name=determined_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 16:34:28 compute-0 podman[77842]: 2025-10-01 16:34:28.34201308 +0000 UTC m=+0.180839466 container start 0b54d5ff093d0b7ad0dbdba4c7ae57f3631e6907e898df06a0cf814d706b38d0 (image=quay.io/ceph/ceph:v18, name=determined_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:34:28 compute-0 podman[77842]: 2025-10-01 16:34:28.347397876 +0000 UTC m=+0.186224272 container attach 0b54d5ff093d0b7ad0dbdba4c7ae57f3631e6907e898df06a0cf814d706b38d0 (image=quay.io/ceph/ceph:v18, name=determined_hofstadter, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 16:34:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:34:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Oct 01 16:34:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3782120100' entity='client.admin' 
Oct 01 16:34:28 compute-0 determined_hofstadter[77859]: set mgr/dashboard/cluster/status
Oct 01 16:34:29 compute-0 systemd[1]: libpod-0b54d5ff093d0b7ad0dbdba4c7ae57f3631e6907e898df06a0cf814d706b38d0.scope: Deactivated successfully.
Oct 01 16:34:29 compute-0 podman[77885]: 2025-10-01 16:34:29.033694311 +0000 UTC m=+0.020872848 container died 0b54d5ff093d0b7ad0dbdba4c7ae57f3631e6907e898df06a0cf814d706b38d0 (image=quay.io/ceph/ceph:v18, name=determined_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:34:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a25abba606019b67e06d859f76290abfbe9c357a9e31bc20e89b0653451eeb2e-merged.mount: Deactivated successfully.
Oct 01 16:34:29 compute-0 ceph-mgr[74571]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 16:34:29 compute-0 podman[77885]: 2025-10-01 16:34:29.412440942 +0000 UTC m=+0.399619469 container remove 0b54d5ff093d0b7ad0dbdba4c7ae57f3631e6907e898df06a0cf814d706b38d0 (image=quay.io/ceph/ceph:v18, name=determined_hofstadter, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:34:29 compute-0 systemd[1]: libpod-conmon-0b54d5ff093d0b7ad0dbdba4c7ae57f3631e6907e898df06a0cf814d706b38d0.scope: Deactivated successfully.
Oct 01 16:34:29 compute-0 sudo[73252]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:29 compute-0 podman[77907]: 2025-10-01 16:34:29.613493047 +0000 UTC m=+0.064261093 container create 8e06a262b55e32fc0ef4dd8c73287c868f5e4dfaaabede77582f534988b55eb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 16:34:29 compute-0 systemd[1]: Started libpod-conmon-8e06a262b55e32fc0ef4dd8c73287c868f5e4dfaaabede77582f534988b55eb5.scope.
Oct 01 16:34:29 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:29 compute-0 podman[77907]: 2025-10-01 16:34:29.590534688 +0000 UTC m=+0.041302754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:34:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbdc9b4ad4b31fb4a95a0e116bef24e4c8131354d85ede3c927aefd2638b7a2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbdc9b4ad4b31fb4a95a0e116bef24e4c8131354d85ede3c927aefd2638b7a2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbdc9b4ad4b31fb4a95a0e116bef24e4c8131354d85ede3c927aefd2638b7a2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbdc9b4ad4b31fb4a95a0e116bef24e4c8131354d85ede3c927aefd2638b7a2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:29 compute-0 podman[77907]: 2025-10-01 16:34:29.706855634 +0000 UTC m=+0.157623690 container init 8e06a262b55e32fc0ef4dd8c73287c868f5e4dfaaabede77582f534988b55eb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shannon, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 16:34:29 compute-0 podman[77907]: 2025-10-01 16:34:29.716276702 +0000 UTC m=+0.167044758 container start 8e06a262b55e32fc0ef4dd8c73287c868f5e4dfaaabede77582f534988b55eb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:34:29 compute-0 podman[77907]: 2025-10-01 16:34:29.720442017 +0000 UTC m=+0.171210113 container attach 8e06a262b55e32fc0ef4dd8c73287c868f5e4dfaaabede77582f534988b55eb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shannon, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:29 compute-0 sudo[77953]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffxkskbeslyistcejxcekveqzkucgyle ; /usr/bin/python3'
Oct 01 16:34:29 compute-0 sudo[77953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:34:29 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3782120100' entity='client.admin' 
Oct 01 16:34:30 compute-0 python3[77955]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:34:30 compute-0 podman[77956]: 2025-10-01 16:34:30.119640724 +0000 UTC m=+0.078873242 container create 394e5121fbdf46159087b1c6a0bbd456e046adad380365fe7cd27631515e2d3f (image=quay.io/ceph/ceph:v18, name=ecstatic_sanderson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:34:30 compute-0 systemd[1]: Started libpod-conmon-394e5121fbdf46159087b1c6a0bbd456e046adad380365fe7cd27631515e2d3f.scope.
Oct 01 16:34:30 compute-0 podman[77956]: 2025-10-01 16:34:30.083213235 +0000 UTC m=+0.042445843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:30 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e97ee3182cd6172097d95e47dd4067727ab45b445f1b003c3afcc1bc6cf3957/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e97ee3182cd6172097d95e47dd4067727ab45b445f1b003c3afcc1bc6cf3957/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:30 compute-0 podman[77956]: 2025-10-01 16:34:30.209191295 +0000 UTC m=+0.168423813 container init 394e5121fbdf46159087b1c6a0bbd456e046adad380365fe7cd27631515e2d3f (image=quay.io/ceph/ceph:v18, name=ecstatic_sanderson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:34:30 compute-0 podman[77956]: 2025-10-01 16:34:30.215861603 +0000 UTC m=+0.175094091 container start 394e5121fbdf46159087b1c6a0bbd456e046adad380365fe7cd27631515e2d3f (image=quay.io/ceph/ceph:v18, name=ecstatic_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 16:34:30 compute-0 podman[77956]: 2025-10-01 16:34:30.220923931 +0000 UTC m=+0.180156409 container attach 394e5121fbdf46159087b1c6a0bbd456e046adad380365fe7cd27631515e2d3f (image=quay.io/ceph/ceph:v18, name=ecstatic_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Oct 01 16:34:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3038849975' entity='client.admin' 
Oct 01 16:34:30 compute-0 systemd[1]: libpod-394e5121fbdf46159087b1c6a0bbd456e046adad380365fe7cd27631515e2d3f.scope: Deactivated successfully.
Oct 01 16:34:30 compute-0 podman[77956]: 2025-10-01 16:34:30.800204273 +0000 UTC m=+0.759436761 container died 394e5121fbdf46159087b1c6a0bbd456e046adad380365fe7cd27631515e2d3f (image=quay.io/ceph/ceph:v18, name=ecstatic_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e97ee3182cd6172097d95e47dd4067727ab45b445f1b003c3afcc1bc6cf3957-merged.mount: Deactivated successfully.
Oct 01 16:34:30 compute-0 podman[77956]: 2025-10-01 16:34:30.856589077 +0000 UTC m=+0.815821565 container remove 394e5121fbdf46159087b1c6a0bbd456e046adad380365fe7cd27631515e2d3f (image=quay.io/ceph/ceph:v18, name=ecstatic_sanderson, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 16:34:30 compute-0 systemd[1]: libpod-conmon-394e5121fbdf46159087b1c6a0bbd456e046adad380365fe7cd27631515e2d3f.scope: Deactivated successfully.
Oct 01 16:34:30 compute-0 sudo[77953]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:31 compute-0 silly_shannon[77925]: [
Oct 01 16:34:31 compute-0 silly_shannon[77925]:     {
Oct 01 16:34:31 compute-0 silly_shannon[77925]:         "available": false,
Oct 01 16:34:31 compute-0 silly_shannon[77925]:         "ceph_device": false,
Oct 01 16:34:31 compute-0 silly_shannon[77925]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:         "lsm_data": {},
Oct 01 16:34:31 compute-0 silly_shannon[77925]:         "lvs": [],
Oct 01 16:34:31 compute-0 silly_shannon[77925]:         "path": "/dev/sr0",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:         "rejected_reasons": [
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "Has a FileSystem",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "Insufficient space (<5GB)"
Oct 01 16:34:31 compute-0 silly_shannon[77925]:         ],
Oct 01 16:34:31 compute-0 silly_shannon[77925]:         "sys_api": {
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "actuators": null,
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "device_nodes": "sr0",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "devname": "sr0",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "human_readable_size": "482.00 KB",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "id_bus": "ata",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "model": "QEMU DVD-ROM",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "nr_requests": "2",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "parent": "/dev/sr0",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "partitions": {},
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "path": "/dev/sr0",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "removable": "1",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "rev": "2.5+",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "ro": "0",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "rotational": "0",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "sas_address": "",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "sas_device_handle": "",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "scheduler_mode": "mq-deadline",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "sectors": 0,
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "sectorsize": "2048",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "size": 493568.0,
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "support_discard": "2048",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "type": "disk",
Oct 01 16:34:31 compute-0 silly_shannon[77925]:             "vendor": "QEMU"
Oct 01 16:34:31 compute-0 silly_shannon[77925]:         }
Oct 01 16:34:31 compute-0 silly_shannon[77925]:     }
Oct 01 16:34:31 compute-0 silly_shannon[77925]: ]
Oct 01 16:34:31 compute-0 systemd[1]: libpod-8e06a262b55e32fc0ef4dd8c73287c868f5e4dfaaabede77582f534988b55eb5.scope: Deactivated successfully.
Oct 01 16:34:31 compute-0 systemd[1]: libpod-8e06a262b55e32fc0ef4dd8c73287c868f5e4dfaaabede77582f534988b55eb5.scope: Consumed 1.499s CPU time.
Oct 01 16:34:31 compute-0 podman[77907]: 2025-10-01 16:34:31.202571391 +0000 UTC m=+1.653339437 container died 8e06a262b55e32fc0ef4dd8c73287c868f5e4dfaaabede77582f534988b55eb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shannon, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbdc9b4ad4b31fb4a95a0e116bef24e4c8131354d85ede3c927aefd2638b7a2a-merged.mount: Deactivated successfully.
Oct 01 16:34:31 compute-0 podman[77907]: 2025-10-01 16:34:31.263233462 +0000 UTC m=+1.714001508 container remove 8e06a262b55e32fc0ef4dd8c73287c868f5e4dfaaabede77582f534988b55eb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 16:34:31 compute-0 ceph-mgr[74571]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct 01 16:34:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:31 compute-0 ceph-mon[74273]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 01 16:34:31 compute-0 systemd[1]: libpod-conmon-8e06a262b55e32fc0ef4dd8c73287c868f5e4dfaaabede77582f534988b55eb5.scope: Deactivated successfully.
Oct 01 16:34:31 compute-0 sudo[77612]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:34:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:34:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:34:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:34:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 01 16:34:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 16:34:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:34:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:34:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:34:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:34:31 compute-0 ceph-mgr[74571]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 01 16:34:31 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 01 16:34:31 compute-0 sudo[79840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:31 compute-0 sudo[79840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:31 compute-0 sudo[79840]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:31 compute-0 sudo[79865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 01 16:34:31 compute-0 sudo[79865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:31 compute-0 sudo[79865]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:31 compute-0 sudo[79913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:31 compute-0 sudo[79913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:31 compute-0 sudo[79913]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:31 compute-0 sudo[80010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfadmvhfrsfzezgjdbcyzaaizyeheeum ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759336471.1878843-32983-49013091422605/async_wrapper.py j561543249976 30 /home/zuul/.ansible/tmp/ansible-tmp-1759336471.1878843-32983-49013091422605/AnsiballZ_command.py _'
Oct 01 16:34:31 compute-0 sudo[80010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:34:31 compute-0 sudo[79963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/etc/ceph
Oct 01 16:34:31 compute-0 sudo[79963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:31 compute-0 sudo[79963]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3038849975' entity='client.admin' 
Oct 01 16:34:31 compute-0 ceph-mon[74273]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 01 16:34:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 16:34:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:34:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:34:31 compute-0 sudo[80015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:31 compute-0 sudo[80015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:31 compute-0 sudo[80015]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:31 compute-0 ansible-async_wrapper.py[80012]: Invoked with j561543249976 30 /home/zuul/.ansible/tmp/ansible-tmp-1759336471.1878843-32983-49013091422605/AnsiballZ_command.py _
Oct 01 16:34:31 compute-0 ansible-async_wrapper.py[80063]: Starting module and watcher
Oct 01 16:34:31 compute-0 ansible-async_wrapper.py[80063]: Start watching 80066 (30)
Oct 01 16:34:31 compute-0 ansible-async_wrapper.py[80066]: Start module (80066)
Oct 01 16:34:31 compute-0 ansible-async_wrapper.py[80012]: Return async_wrapper task started.
Oct 01 16:34:31 compute-0 sudo[80010]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:31 compute-0 sudo[80040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/etc/ceph/ceph.conf.new
Oct 01 16:34:31 compute-0 sudo[80040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:31 compute-0 sudo[80040]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:31 compute-0 sudo[80070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:31 compute-0 sudo[80070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:31 compute-0 sudo[80070]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:32 compute-0 python3[80067]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:34:32 compute-0 sudo[80095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:34:32 compute-0 sudo[80095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:32 compute-0 sudo[80095]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:32 compute-0 podman[80118]: 2025-10-01 16:34:32.097097552 +0000 UTC m=+0.041811857 container create e9979af0da2587cfc1bb62a30c4f7a1ddf17bd10d1cb853d9b76c6c5faccab39 (image=quay.io/ceph/ceph:v18, name=elegant_turing, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 16:34:32 compute-0 systemd[1]: Started libpod-conmon-e9979af0da2587cfc1bb62a30c4f7a1ddf17bd10d1cb853d9b76c6c5faccab39.scope.
Oct 01 16:34:32 compute-0 sudo[80126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:32 compute-0 sudo[80126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:32 compute-0 sudo[80126]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:32 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c1b0a54edc9d0bc81f84e1785337ac4c1ee1f1c4168011686af15d52914833/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c1b0a54edc9d0bc81f84e1785337ac4c1ee1f1c4168011686af15d52914833/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:32 compute-0 podman[80118]: 2025-10-01 16:34:32.080468302 +0000 UTC m=+0.025182627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:32 compute-0 podman[80118]: 2025-10-01 16:34:32.195129277 +0000 UTC m=+0.139843632 container init e9979af0da2587cfc1bb62a30c4f7a1ddf17bd10d1cb853d9b76c6c5faccab39 (image=quay.io/ceph/ceph:v18, name=elegant_turing, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct 01 16:34:32 compute-0 podman[80118]: 2025-10-01 16:34:32.203949099 +0000 UTC m=+0.148663414 container start e9979af0da2587cfc1bb62a30c4f7a1ddf17bd10d1cb853d9b76c6c5faccab39 (image=quay.io/ceph/ceph:v18, name=elegant_turing, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:34:32 compute-0 podman[80118]: 2025-10-01 16:34:32.206970736 +0000 UTC m=+0.151685101 container attach e9979af0da2587cfc1bb62a30c4f7a1ddf17bd10d1cb853d9b76c6c5faccab39 (image=quay.io/ceph/ceph:v18, name=elegant_turing, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 16:34:32 compute-0 sudo[80163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/etc/ceph/ceph.conf.new
Oct 01 16:34:32 compute-0 sudo[80163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:32 compute-0 sudo[80163]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:32 compute-0 sudo[80212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:32 compute-0 sudo[80212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:32 compute-0 sudo[80212]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:32 compute-0 sudo[80237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/etc/ceph/ceph.conf.new
Oct 01 16:34:32 compute-0 sudo[80237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:32 compute-0 sudo[80237]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:32 compute-0 sudo[80262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:32 compute-0 sudo[80262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:32 compute-0 sudo[80262]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:32 compute-0 sudo[80306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/etc/ceph/ceph.conf.new
Oct 01 16:34:32 compute-0 sudo[80306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:32 compute-0 sudo[80306]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:32 compute-0 sudo[80331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:32 compute-0 sudo[80331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:32 compute-0 sudo[80331]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:32 compute-0 sudo[80356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 01 16:34:32 compute-0 sudo[80356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:32 compute-0 sudo[80356]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:32 compute-0 ceph-mgr[74571]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config/ceph.conf
Oct 01 16:34:32 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config/ceph.conf
Oct 01 16:34:32 compute-0 ceph-mon[74273]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:32 compute-0 ceph-mon[74273]: Updating compute-0:/etc/ceph/ceph.conf
Oct 01 16:34:32 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 16:34:32 compute-0 elegant_turing[80159]: 
Oct 01 16:34:32 compute-0 elegant_turing[80159]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 01 16:34:32 compute-0 systemd[1]: libpod-e9979af0da2587cfc1bb62a30c4f7a1ddf17bd10d1cb853d9b76c6c5faccab39.scope: Deactivated successfully.
Oct 01 16:34:32 compute-0 podman[80118]: 2025-10-01 16:34:32.828206948 +0000 UTC m=+0.772921253 container died e9979af0da2587cfc1bb62a30c4f7a1ddf17bd10d1cb853d9b76c6c5faccab39 (image=quay.io/ceph/ceph:v18, name=elegant_turing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:32 compute-0 sudo[80381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:32 compute-0 sudo[80381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:32 compute-0 sudo[80381]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:32 compute-0 sudo[80415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config
Oct 01 16:34:32 compute-0 sudo[80415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:32 compute-0 sudo[80415]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-06c1b0a54edc9d0bc81f84e1785337ac4c1ee1f1c4168011686af15d52914833-merged.mount: Deactivated successfully.
Oct 01 16:34:32 compute-0 sudo[80446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:32 compute-0 sudo[80446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:32 compute-0 sudo[80446]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:32 compute-0 podman[80118]: 2025-10-01 16:34:32.97923287 +0000 UTC m=+0.923947175 container remove e9979af0da2587cfc1bb62a30c4f7a1ddf17bd10d1cb853d9b76c6c5faccab39 (image=quay.io/ceph/ceph:v18, name=elegant_turing, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:34:32 compute-0 systemd[1]: libpod-conmon-e9979af0da2587cfc1bb62a30c4f7a1ddf17bd10d1cb853d9b76c6c5faccab39.scope: Deactivated successfully.
Oct 01 16:34:32 compute-0 ansible-async_wrapper.py[80066]: Module complete (80066)
Oct 01 16:34:33 compute-0 sudo[80495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config
Oct 01 16:34:33 compute-0 sudo[80495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:33 compute-0 sudo[80495]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:33 compute-0 sudo[80520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:33 compute-0 sudo[80520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:33 compute-0 sudo[80520]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:33 compute-0 sudo[80545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config/ceph.conf.new
Oct 01 16:34:33 compute-0 sudo[80545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:33 compute-0 sudo[80545]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:33 compute-0 sudo[80594]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psvkenixakgrtwjhsbqrrstvlwbsqwvl ; /usr/bin/python3'
Oct 01 16:34:33 compute-0 sudo[80594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:34:33 compute-0 sudo[80593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:33 compute-0 sudo[80593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:33 compute-0 sudo[80593]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:33 compute-0 sudo[80621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:34:33 compute-0 sudo[80621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:33 compute-0 sudo[80621]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:33 compute-0 python3[80612]: ansible-ansible.legacy.async_status Invoked with jid=j561543249976.80012 mode=status _async_dir=/root/.ansible_async
Oct 01 16:34:33 compute-0 sudo[80594]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:33 compute-0 sudo[80646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:33 compute-0 sudo[80646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:33 compute-0 sudo[80646]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:33 compute-0 sudo[80671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config/ceph.conf.new
Oct 01 16:34:33 compute-0 sudo[80671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:33 compute-0 sudo[80671]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:33 compute-0 sudo[80765]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrbfxylsjmygvwpvsfhpzhsutbddrgut ; /usr/bin/python3'
Oct 01 16:34:33 compute-0 sudo[80765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:34:33 compute-0 sudo[80766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:33 compute-0 sudo[80766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:33 compute-0 sudo[80766]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:33 compute-0 sudo[80793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config/ceph.conf.new
Oct 01 16:34:33 compute-0 sudo[80793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:33 compute-0 sudo[80793]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:33 compute-0 python3[80772]: ansible-ansible.legacy.async_status Invoked with jid=j561543249976.80012 mode=cleanup _async_dir=/root/.ansible_async
Oct 01 16:34:33 compute-0 sudo[80765]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:34:33 compute-0 sudo[80818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:33 compute-0 sudo[80818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:33 compute-0 sudo[80818]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:33 compute-0 sudo[80843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config/ceph.conf.new
Oct 01 16:34:33 compute-0 sudo[80843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:33 compute-0 sudo[80843]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:33 compute-0 sudo[80868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:33 compute-0 sudo[80868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:33 compute-0 sudo[80868]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:33 compute-0 ceph-mon[74273]: Updating compute-0:/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config/ceph.conf
Oct 01 16:34:33 compute-0 ceph-mon[74273]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 16:34:33 compute-0 sudo[80893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config/ceph.conf.new /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config/ceph.conf
Oct 01 16:34:33 compute-0 sudo[80893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:33 compute-0 sudo[80893]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:33 compute-0 ceph-mgr[74571]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 01 16:34:33 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 01 16:34:33 compute-0 sudo[80918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:33 compute-0 sudo[80918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:33 compute-0 sudo[80918]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:33 compute-0 sudo[80979]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnhnwjmxihmqyhnrcoiymafidumaanpe ; /usr/bin/python3'
Oct 01 16:34:33 compute-0 sudo[80979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:34:33 compute-0 sudo[80955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 01 16:34:33 compute-0 sudo[80955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:33 compute-0 sudo[80955]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:34 compute-0 sudo[80994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:34 compute-0 sudo[80994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:34 compute-0 sudo[80994]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:34 compute-0 sudo[81019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/etc/ceph
Oct 01 16:34:34 compute-0 sudo[81019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:34 compute-0 sudo[81019]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:34 compute-0 python3[80991]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:34:34 compute-0 sudo[81044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:34 compute-0 sudo[81044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:34 compute-0 sudo[81044]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:34 compute-0 sudo[80979]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:34 compute-0 sudo[81071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/etc/ceph/ceph.client.admin.keyring.new
Oct 01 16:34:34 compute-0 sudo[81071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:34 compute-0 sudo[81071]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:34 compute-0 sudo[81096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:34 compute-0 sudo[81096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:34 compute-0 sudo[81096]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:34 compute-0 sudo[81121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:34:34 compute-0 sudo[81121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:34 compute-0 sudo[81121]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:34 compute-0 sudo[81146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:34 compute-0 sudo[81146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:34 compute-0 sudo[81146]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:34 compute-0 sudo[81171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/etc/ceph/ceph.client.admin.keyring.new
Oct 01 16:34:34 compute-0 sudo[81171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:34 compute-0 sudo[81171]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:34 compute-0 sudo[81217]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyryxohuhnmjqfizqapwhladgbqxlugj ; /usr/bin/python3'
Oct 01 16:34:34 compute-0 sudo[81217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:34:34 compute-0 sudo[81245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:34 compute-0 sudo[81245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:34 compute-0 sudo[81245]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:34 compute-0 python3[81221]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:34:34 compute-0 sudo[81270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/etc/ceph/ceph.client.admin.keyring.new
Oct 01 16:34:34 compute-0 sudo[81270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:34 compute-0 sudo[81270]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:34 compute-0 sudo[81301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:34 compute-0 sudo[81301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:34 compute-0 sudo[81301]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:34 compute-0 podman[81294]: 2025-10-01 16:34:34.789758952 +0000 UTC m=+0.104638312 container create 6e62ba5d7427723db5b99c41082db428770cb5712e146c0e92fc4e4fa8d244e5 (image=quay.io/ceph/ceph:v18, name=keen_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:34:34 compute-0 sudo[81333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/etc/ceph/ceph.client.admin.keyring.new
Oct 01 16:34:34 compute-0 podman[81294]: 2025-10-01 16:34:34.708622075 +0000 UTC m=+0.023501475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:34 compute-0 sudo[81333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:34 compute-0 sudo[81333]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:34 compute-0 ceph-mon[74273]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:34 compute-0 systemd[1]: Started libpod-conmon-6e62ba5d7427723db5b99c41082db428770cb5712e146c0e92fc4e4fa8d244e5.scope.
Oct 01 16:34:34 compute-0 sudo[81358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:34 compute-0 sudo[81358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:34 compute-0 sudo[81358]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:34 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1175a51da2e6741e58d2b0681ea9d9dbfe62380cd172770822d55107ecae23e5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1175a51da2e6741e58d2b0681ea9d9dbfe62380cd172770822d55107ecae23e5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1175a51da2e6741e58d2b0681ea9d9dbfe62380cd172770822d55107ecae23e5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:34 compute-0 podman[81294]: 2025-10-01 16:34:34.905031892 +0000 UTC m=+0.219911252 container init 6e62ba5d7427723db5b99c41082db428770cb5712e146c0e92fc4e4fa8d244e5 (image=quay.io/ceph/ceph:v18, name=keen_grothendieck, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:34:34 compute-0 podman[81294]: 2025-10-01 16:34:34.912383618 +0000 UTC m=+0.227263008 container start 6e62ba5d7427723db5b99c41082db428770cb5712e146c0e92fc4e4fa8d244e5 (image=quay.io/ceph/ceph:v18, name=keen_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:34:34 compute-0 podman[81294]: 2025-10-01 16:34:34.916208874 +0000 UTC m=+0.231088274 container attach 6e62ba5d7427723db5b99c41082db428770cb5712e146c0e92fc4e4fa8d244e5 (image=quay.io/ceph/ceph:v18, name=keen_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:34:34 compute-0 sudo[81388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 01 16:34:34 compute-0 sudo[81388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:34 compute-0 sudo[81388]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:34 compute-0 ceph-mgr[74571]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config/ceph.client.admin.keyring
Oct 01 16:34:34 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config/ceph.client.admin.keyring
Oct 01 16:34:35 compute-0 sudo[81414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:35 compute-0 sudo[81414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:35 compute-0 sudo[81414]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:35 compute-0 sudo[81439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config
Oct 01 16:34:35 compute-0 sudo[81439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:35 compute-0 sudo[81439]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:35 compute-0 sudo[81464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:35 compute-0 sudo[81464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:35 compute-0 sudo[81464]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:35 compute-0 sudo[81489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config
Oct 01 16:34:35 compute-0 sudo[81489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:35 compute-0 sudo[81489]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:35 compute-0 sudo[81533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:35 compute-0 sudo[81533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:35 compute-0 sudo[81533]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:35 compute-0 sudo[81558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config/ceph.client.admin.keyring.new
Oct 01 16:34:35 compute-0 sudo[81558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:35 compute-0 sudo[81558]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:35 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 16:34:35 compute-0 keen_grothendieck[81383]: 
Oct 01 16:34:35 compute-0 keen_grothendieck[81383]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 01 16:34:35 compute-0 sudo[81583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:35 compute-0 sudo[81583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:35 compute-0 sudo[81583]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:35 compute-0 systemd[1]: libpod-6e62ba5d7427723db5b99c41082db428770cb5712e146c0e92fc4e4fa8d244e5.scope: Deactivated successfully.
Oct 01 16:34:35 compute-0 podman[81294]: 2025-10-01 16:34:35.472947538 +0000 UTC m=+0.787826918 container died 6e62ba5d7427723db5b99c41082db428770cb5712e146c0e92fc4e4fa8d244e5 (image=quay.io/ceph/ceph:v18, name=keen_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:34:35 compute-0 sudo[81610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:34:35 compute-0 sudo[81610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:35 compute-0 sudo[81610]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:35 compute-0 sudo[81646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:35 compute-0 sudo[81646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:35 compute-0 sudo[81646]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:35 compute-0 sudo[81671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config/ceph.client.admin.keyring.new
Oct 01 16:34:35 compute-0 sudo[81671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:35 compute-0 sudo[81671]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1175a51da2e6741e58d2b0681ea9d9dbfe62380cd172770822d55107ecae23e5-merged.mount: Deactivated successfully.
Oct 01 16:34:35 compute-0 podman[81294]: 2025-10-01 16:34:35.74714146 +0000 UTC m=+1.062020860 container remove 6e62ba5d7427723db5b99c41082db428770cb5712e146c0e92fc4e4fa8d244e5 (image=quay.io/ceph/ceph:v18, name=keen_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:34:35 compute-0 sudo[81720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:35 compute-0 sudo[81720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:35 compute-0 sudo[81720]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:35 compute-0 sudo[81217]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:35 compute-0 sudo[81745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config/ceph.client.admin.keyring.new
Oct 01 16:34:35 compute-0 sudo[81745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:35 compute-0 sudo[81745]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:35 compute-0 ceph-mon[74273]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 01 16:34:35 compute-0 systemd[1]: libpod-conmon-6e62ba5d7427723db5b99c41082db428770cb5712e146c0e92fc4e4fa8d244e5.scope: Deactivated successfully.
Oct 01 16:34:35 compute-0 sudo[81770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:35 compute-0 sudo[81770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:35 compute-0 sudo[81770]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:35 compute-0 sudo[81795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config/ceph.client.admin.keyring.new
Oct 01 16:34:35 compute-0 sudo[81795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:35 compute-0 sudo[81795]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:36 compute-0 sudo[81820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:36 compute-0 sudo[81820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:36 compute-0 sudo[81820]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:36 compute-0 sudo[81868]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyhzxoctqzhjalwecpshrcphssyafcdq ; /usr/bin/python3'
Oct 01 16:34:36 compute-0 sudo[81868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:34:36 compute-0 sudo[81869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config/ceph.client.admin.keyring.new /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config/ceph.client.admin.keyring
Oct 01 16:34:36 compute-0 sudo[81869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:36 compute-0 sudo[81869]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:34:36 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:34:36 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:34:36 compute-0 python3[81876]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:34:36 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:36 compute-0 ceph-mgr[74571]: [progress INFO root] update: starting ev d6005bf5-300f-4e40-8bc8-7919415d38b1 (Updating crash deployment (+1 -> 1))
Oct 01 16:34:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct 01 16:34:36 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 01 16:34:36 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 01 16:34:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:34:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:34:36 compute-0 ceph-mgr[74571]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct 01 16:34:36 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Oct 01 16:34:36 compute-0 podman[81896]: 2025-10-01 16:34:36.302389346 +0000 UTC m=+0.079020645 container create 3233a903db02f2e0fa2f5263b5eb2f2474ba2f53f5090a54a93d15c172f43e44 (image=quay.io/ceph/ceph:v18, name=wizardly_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:34:36 compute-0 sudo[81908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:36 compute-0 sudo[81908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:36 compute-0 sudo[81908]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:36 compute-0 podman[81896]: 2025-10-01 16:34:36.255613456 +0000 UTC m=+0.032244775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:36 compute-0 systemd[1]: Started libpod-conmon-3233a903db02f2e0fa2f5263b5eb2f2474ba2f53f5090a54a93d15c172f43e44.scope.
Oct 01 16:34:36 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d68c13f110e079bbb6401654031748dc84721519799548ab08daddd9776c98/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d68c13f110e079bbb6401654031748dc84721519799548ab08daddd9776c98/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d68c13f110e079bbb6401654031748dc84721519799548ab08daddd9776c98/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:36 compute-0 podman[81896]: 2025-10-01 16:34:36.41346209 +0000 UTC m=+0.190093409 container init 3233a903db02f2e0fa2f5263b5eb2f2474ba2f53f5090a54a93d15c172f43e44 (image=quay.io/ceph/ceph:v18, name=wizardly_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:34:36 compute-0 sudo[81936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:34:36 compute-0 sudo[81936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:36 compute-0 podman[81896]: 2025-10-01 16:34:36.423961195 +0000 UTC m=+0.200592494 container start 3233a903db02f2e0fa2f5263b5eb2f2474ba2f53f5090a54a93d15c172f43e44 (image=quay.io/ceph/ceph:v18, name=wizardly_lamarr, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:34:36 compute-0 sudo[81936]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:36 compute-0 podman[81896]: 2025-10-01 16:34:36.427402172 +0000 UTC m=+0.204033471 container attach 3233a903db02f2e0fa2f5263b5eb2f2474ba2f53f5090a54a93d15c172f43e44 (image=quay.io/ceph/ceph:v18, name=wizardly_lamarr, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 16:34:36 compute-0 sudo[81965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:36 compute-0 sudo[81965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:36 compute-0 sudo[81965]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:36 compute-0 sudo[81990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:34:36 compute-0 sudo[81990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:36 compute-0 ansible-async_wrapper.py[80063]: Done in kid B.
Oct 01 16:34:36 compute-0 ceph-mon[74273]: Updating compute-0:/var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/config/ceph.client.admin.keyring
Oct 01 16:34:36 compute-0 ceph-mon[74273]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:36 compute-0 ceph-mon[74273]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 16:34:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 01 16:34:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 01 16:34:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:34:36 compute-0 ceph-mon[74273]: Deploying daemon crash.compute-0 on compute-0
Oct 01 16:34:36 compute-0 podman[82072]: 2025-10-01 16:34:36.90029237 +0000 UTC m=+0.060486318 container create ddecf20061396bb1d1014482bf339c90642e12077ddcb32261db40570afe0efe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_liskov, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:34:36 compute-0 systemd[1]: Started libpod-conmon-ddecf20061396bb1d1014482bf339c90642e12077ddcb32261db40570afe0efe.scope.
Oct 01 16:34:36 compute-0 podman[82072]: 2025-10-01 16:34:36.8686349 +0000 UTC m=+0.028828858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:34:36 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Oct 01 16:34:36 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2736565713' entity='client.admin' 
Oct 01 16:34:36 compute-0 podman[82072]: 2025-10-01 16:34:36.996619301 +0000 UTC m=+0.156813239 container init ddecf20061396bb1d1014482bf339c90642e12077ddcb32261db40570afe0efe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_liskov, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:37 compute-0 podman[82072]: 2025-10-01 16:34:37.003144276 +0000 UTC m=+0.163338234 container start ddecf20061396bb1d1014482bf339c90642e12077ddcb32261db40570afe0efe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_liskov, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:34:37 compute-0 gracious_liskov[82089]: 167 167
Oct 01 16:34:37 compute-0 podman[82072]: 2025-10-01 16:34:37.007397323 +0000 UTC m=+0.167591271 container attach ddecf20061396bb1d1014482bf339c90642e12077ddcb32261db40570afe0efe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 16:34:37 compute-0 systemd[1]: libpod-ddecf20061396bb1d1014482bf339c90642e12077ddcb32261db40570afe0efe.scope: Deactivated successfully.
Oct 01 16:34:37 compute-0 systemd[1]: libpod-3233a903db02f2e0fa2f5263b5eb2f2474ba2f53f5090a54a93d15c172f43e44.scope: Deactivated successfully.
Oct 01 16:34:37 compute-0 podman[81896]: 2025-10-01 16:34:37.014761139 +0000 UTC m=+0.791392468 container died 3233a903db02f2e0fa2f5263b5eb2f2474ba2f53f5090a54a93d15c172f43e44 (image=quay.io/ceph/ceph:v18, name=wizardly_lamarr, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:34:37 compute-0 podman[82096]: 2025-10-01 16:34:37.054695567 +0000 UTC m=+0.034730277 container died ddecf20061396bb1d1014482bf339c90642e12077ddcb32261db40570afe0efe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:34:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-89d68c13f110e079bbb6401654031748dc84721519799548ab08daddd9776c98-merged.mount: Deactivated successfully.
Oct 01 16:34:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-e27d05060b4f22688c3a49f0298066452c4d094927a9d79ab0eb005a67172da7-merged.mount: Deactivated successfully.
Oct 01 16:34:37 compute-0 podman[81896]: 2025-10-01 16:34:37.096358039 +0000 UTC m=+0.872989338 container remove 3233a903db02f2e0fa2f5263b5eb2f2474ba2f53f5090a54a93d15c172f43e44 (image=quay.io/ceph/ceph:v18, name=wizardly_lamarr, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 16:34:37 compute-0 podman[82096]: 2025-10-01 16:34:37.113331317 +0000 UTC m=+0.093365997 container remove ddecf20061396bb1d1014482bf339c90642e12077ddcb32261db40570afe0efe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:34:37 compute-0 systemd[1]: libpod-conmon-ddecf20061396bb1d1014482bf339c90642e12077ddcb32261db40570afe0efe.scope: Deactivated successfully.
Oct 01 16:34:37 compute-0 systemd[1]: libpod-conmon-3233a903db02f2e0fa2f5263b5eb2f2474ba2f53f5090a54a93d15c172f43e44.scope: Deactivated successfully.
Oct 01 16:34:37 compute-0 sudo[81868]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:37 compute-0 systemd[1]: Reloading.
Oct 01 16:34:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:37 compute-0 systemd-sysv-generator[82173]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:34:37 compute-0 systemd-rc-local-generator[82169]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:34:37 compute-0 sudo[82179]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acarhozvyfpvihejxfkboenccxgippfz ; /usr/bin/python3'
Oct 01 16:34:37 compute-0 sudo[82179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:34:37 compute-0 systemd[1]: Reloading.
Oct 01 16:34:37 compute-0 python3[82186]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:34:37 compute-0 systemd-rc-local-generator[82220]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:34:37 compute-0 systemd-sysv-generator[82224]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:34:37 compute-0 podman[82226]: 2025-10-01 16:34:37.651117403 +0000 UTC m=+0.054479676 container create 2aa1b085955e49eb45c19b8b5111c9d8cb0008551258c2c8d26b9a7d847c7a03 (image=quay.io/ceph/ceph:v18, name=competent_bose, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:34:37 compute-0 podman[82226]: 2025-10-01 16:34:37.628221655 +0000 UTC m=+0.031583958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:37 compute-0 systemd[1]: Started libpod-conmon-2aa1b085955e49eb45c19b8b5111c9d8cb0008551258c2c8d26b9a7d847c7a03.scope.
Oct 01 16:34:37 compute-0 systemd[1]: Starting Ceph crash.compute-0 for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5...
Oct 01 16:34:37 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65590a3891ea5a6a49dc5f316d104c945b6c9f08e611fc0cfed75c29e7d041b3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65590a3891ea5a6a49dc5f316d104c945b6c9f08e611fc0cfed75c29e7d041b3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65590a3891ea5a6a49dc5f316d104c945b6c9f08e611fc0cfed75c29e7d041b3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:37 compute-0 podman[82226]: 2025-10-01 16:34:37.829350382 +0000 UTC m=+0.232712685 container init 2aa1b085955e49eb45c19b8b5111c9d8cb0008551258c2c8d26b9a7d847c7a03 (image=quay.io/ceph/ceph:v18, name=competent_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 16:34:37 compute-0 podman[82226]: 2025-10-01 16:34:37.842532205 +0000 UTC m=+0.245894478 container start 2aa1b085955e49eb45c19b8b5111c9d8cb0008551258c2c8d26b9a7d847c7a03 (image=quay.io/ceph/ceph:v18, name=competent_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:34:37 compute-0 podman[82226]: 2025-10-01 16:34:37.846349652 +0000 UTC m=+0.249711925 container attach 2aa1b085955e49eb45c19b8b5111c9d8cb0008551258c2c8d26b9a7d847c7a03 (image=quay.io/ceph/ceph:v18, name=competent_bose, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:34:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2736565713' entity='client.admin' 
Oct 01 16:34:37 compute-0 ceph-mon[74273]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:38 compute-0 podman[82297]: 2025-10-01 16:34:38.020552918 +0000 UTC m=+0.041591791 container create 8c459a1c54f7fee05ef7fd6143969f6ec7200cac57af3ab1f02f5b585a3f62a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-crash-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 16:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b680e24d27c20b84cc81bb4c7cc1cf3a491d1907b4b4ddfd280b98661fc0e36f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b680e24d27c20b84cc81bb4c7cc1cf3a491d1907b4b4ddfd280b98661fc0e36f/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b680e24d27c20b84cc81bb4c7cc1cf3a491d1907b4b4ddfd280b98661fc0e36f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b680e24d27c20b84cc81bb4c7cc1cf3a491d1907b4b4ddfd280b98661fc0e36f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:38 compute-0 podman[82297]: 2025-10-01 16:34:38.085768254 +0000 UTC m=+0.106807147 container init 8c459a1c54f7fee05ef7fd6143969f6ec7200cac57af3ab1f02f5b585a3f62a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-crash-compute-0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:34:38 compute-0 podman[82297]: 2025-10-01 16:34:38.090388341 +0000 UTC m=+0.111427214 container start 8c459a1c54f7fee05ef7fd6143969f6ec7200cac57af3ab1f02f5b585a3f62a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:34:38 compute-0 bash[82297]: 8c459a1c54f7fee05ef7fd6143969f6ec7200cac57af3ab1f02f5b585a3f62a2
Oct 01 16:34:38 compute-0 podman[82297]: 2025-10-01 16:34:38.002754099 +0000 UTC m=+0.023793002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:34:38 compute-0 systemd[1]: Started Ceph crash.compute-0 for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5.
Oct 01 16:34:38 compute-0 sudo[81990]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:34:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:34:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 01 16:34:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:38 compute-0 ceph-mgr[74571]: [progress INFO root] complete: finished ev d6005bf5-300f-4e40-8bc8-7919415d38b1 (Updating crash deployment (+1 -> 1))
Oct 01 16:34:38 compute-0 ceph-mgr[74571]: [progress INFO root] Completed event d6005bf5-300f-4e40-8bc8-7919415d38b1 (Updating crash deployment (+1 -> 1)) in 2 seconds
Oct 01 16:34:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 01 16:34:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:38 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev a6de1891-1146-4bed-997c-d5c455c7fc79 does not exist
Oct 01 16:34:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 01 16:34:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:38 compute-0 ceph-mgr[74571]: [progress INFO root] update: starting ev 738ace3e-9368-485d-8a9f-33d9578c09b2 (Updating mgr deployment (+1 -> 2))
Oct 01 16:34:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.xrtzhv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 01 16:34:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.xrtzhv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 01 16:34:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.xrtzhv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 01 16:34:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 01 16:34:38 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 01 16:34:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:34:38 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:34:38 compute-0 ceph-mgr[74571]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.xrtzhv on compute-0
Oct 01 16:34:38 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.xrtzhv on compute-0
Oct 01 16:34:38 compute-0 sudo[82318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:38 compute-0 sudo[82318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:38 compute-0 sudo[82318]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:38 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-crash-compute-0[82313]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 01 16:34:38 compute-0 sudo[82362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:34:38 compute-0 sudo[82362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:38 compute-0 sudo[82362]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:38 compute-0 sudo[82389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:38 compute-0 sudo[82389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:38 compute-0 sudo[82389]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:38 compute-0 sudo[82414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:34:38 compute-0 sudo[82414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Oct 01 16:34:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4069051505' entity='client.admin' 
Oct 01 16:34:38 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-crash-compute-0[82313]: 2025-10-01T16:34:38.507+0000 7f6852ea0640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 01 16:34:38 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-crash-compute-0[82313]: 2025-10-01T16:34:38.507+0000 7f6852ea0640 -1 AuthRegistry(0x7f684c067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 01 16:34:38 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-crash-compute-0[82313]: 2025-10-01T16:34:38.508+0000 7f6852ea0640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 01 16:34:38 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-crash-compute-0[82313]: 2025-10-01T16:34:38.508+0000 7f6852ea0640 -1 AuthRegistry(0x7f6852e9f000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 01 16:34:38 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-crash-compute-0[82313]: 2025-10-01T16:34:38.509+0000 7f6851e9e640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 01 16:34:38 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-crash-compute-0[82313]: 2025-10-01T16:34:38.509+0000 7f6852ea0640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 01 16:34:38 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-crash-compute-0[82313]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct 01 16:34:38 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-crash-compute-0[82313]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct 01 16:34:38 compute-0 systemd[1]: libpod-2aa1b085955e49eb45c19b8b5111c9d8cb0008551258c2c8d26b9a7d847c7a03.scope: Deactivated successfully.
Oct 01 16:34:38 compute-0 podman[82226]: 2025-10-01 16:34:38.527918896 +0000 UTC m=+0.931281159 container died 2aa1b085955e49eb45c19b8b5111c9d8cb0008551258c2c8d26b9a7d847c7a03 (image=quay.io/ceph/ceph:v18, name=competent_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 16:34:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-65590a3891ea5a6a49dc5f316d104c945b6c9f08e611fc0cfed75c29e7d041b3-merged.mount: Deactivated successfully.
Oct 01 16:34:38 compute-0 podman[82226]: 2025-10-01 16:34:38.580959745 +0000 UTC m=+0.984322008 container remove 2aa1b085955e49eb45c19b8b5111c9d8cb0008551258c2c8d26b9a7d847c7a03 (image=quay.io/ceph/ceph:v18, name=competent_bose, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:38 compute-0 systemd[1]: libpod-conmon-2aa1b085955e49eb45c19b8b5111c9d8cb0008551258c2c8d26b9a7d847c7a03.scope: Deactivated successfully.
Oct 01 16:34:38 compute-0 sudo[82179]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:34:38 compute-0 sudo[82530]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roypzrkqxnvgvdjcnuodbwdpvxzgvgnn ; /usr/bin/python3'
Oct 01 16:34:38 compute-0 sudo[82530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:34:38 compute-0 podman[82528]: 2025-10-01 16:34:38.824970615 +0000 UTC m=+0.044284779 container create c032b499974dd4e394d15d0b9cd80f9a7801ba3c12814958641a4da9d1c83687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 16:34:38 compute-0 systemd[1]: Started libpod-conmon-c032b499974dd4e394d15d0b9cd80f9a7801ba3c12814958641a4da9d1c83687.scope.
Oct 01 16:34:38 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:38 compute-0 podman[82528]: 2025-10-01 16:34:38.805564465 +0000 UTC m=+0.024878639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:34:38 compute-0 podman[82528]: 2025-10-01 16:34:38.913211782 +0000 UTC m=+0.132525946 container init c032b499974dd4e394d15d0b9cd80f9a7801ba3c12814958641a4da9d1c83687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_joliot, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:38 compute-0 podman[82528]: 2025-10-01 16:34:38.922459466 +0000 UTC m=+0.141773610 container start c032b499974dd4e394d15d0b9cd80f9a7801ba3c12814958641a4da9d1c83687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_joliot, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:34:38 compute-0 podman[82528]: 2025-10-01 16:34:38.927737319 +0000 UTC m=+0.147051463 container attach c032b499974dd4e394d15d0b9cd80f9a7801ba3c12814958641a4da9d1c83687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 16:34:38 compute-0 crazy_joliot[82547]: 167 167
Oct 01 16:34:38 compute-0 systemd[1]: libpod-c032b499974dd4e394d15d0b9cd80f9a7801ba3c12814958641a4da9d1c83687.scope: Deactivated successfully.
Oct 01 16:34:38 compute-0 podman[82528]: 2025-10-01 16:34:38.93176249 +0000 UTC m=+0.151076644 container died c032b499974dd4e394d15d0b9cd80f9a7801ba3c12814958641a4da9d1c83687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_joliot, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 16:34:38 compute-0 python3[82543]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:34:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a8596ee0f4bff991c26297ef1656cc35a2e52bdb30738baca28272fac0b80d0-merged.mount: Deactivated successfully.
Oct 01 16:34:39 compute-0 podman[82528]: 2025-10-01 16:34:39.013800501 +0000 UTC m=+0.233114645 container remove c032b499974dd4e394d15d0b9cd80f9a7801ba3c12814958641a4da9d1c83687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_joliot, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:34:39 compute-0 systemd[1]: libpod-conmon-c032b499974dd4e394d15d0b9cd80f9a7801ba3c12814958641a4da9d1c83687.scope: Deactivated successfully.
Oct 01 16:34:39 compute-0 podman[82559]: 2025-10-01 16:34:39.057774442 +0000 UTC m=+0.090456105 container create e7b697623e008addd67677a527dfca04b192481a858d7de878633756eeb00b31 (image=quay.io/ceph/ceph:v18, name=heuristic_matsumoto, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 16:34:39 compute-0 systemd[1]: Started libpod-conmon-e7b697623e008addd67677a527dfca04b192481a858d7de878633756eeb00b31.scope.
Oct 01 16:34:39 compute-0 systemd[1]: Reloading.
Oct 01 16:34:39 compute-0 podman[82559]: 2025-10-01 16:34:39.032834542 +0000 UTC m=+0.065516215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.xrtzhv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 01 16:34:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.xrtzhv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 01 16:34:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 01 16:34:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:34:39 compute-0 ceph-mon[74273]: Deploying daemon mgr.compute-0.xrtzhv on compute-0
Oct 01 16:34:39 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4069051505' entity='client.admin' 
Oct 01 16:34:39 compute-0 systemd-sysv-generator[82614]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:34:39 compute-0 systemd-rc-local-generator[82611]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:34:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:39 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f63c9a04ab410191f241aab9b050db9e2edef3418107d973019c4538dfc7e41/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f63c9a04ab410191f241aab9b050db9e2edef3418107d973019c4538dfc7e41/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f63c9a04ab410191f241aab9b050db9e2edef3418107d973019c4538dfc7e41/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:39 compute-0 podman[82559]: 2025-10-01 16:34:39.455749998 +0000 UTC m=+0.488431731 container init e7b697623e008addd67677a527dfca04b192481a858d7de878633756eeb00b31 (image=quay.io/ceph/ceph:v18, name=heuristic_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:34:39 compute-0 podman[82559]: 2025-10-01 16:34:39.470767197 +0000 UTC m=+0.503448870 container start e7b697623e008addd67677a527dfca04b192481a858d7de878633756eeb00b31 (image=quay.io/ceph/ceph:v18, name=heuristic_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:34:39 compute-0 systemd[1]: Reloading.
Oct 01 16:34:39 compute-0 podman[82559]: 2025-10-01 16:34:39.474872731 +0000 UTC m=+0.507554404 container attach e7b697623e008addd67677a527dfca04b192481a858d7de878633756eeb00b31 (image=quay.io/ceph/ceph:v18, name=heuristic_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:39 compute-0 systemd-sysv-generator[82657]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:34:39 compute-0 systemd-rc-local-generator[82653]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:34:39 compute-0 systemd[1]: Starting Ceph mgr.compute-0.xrtzhv for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5...
Oct 01 16:34:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Oct 01 16:34:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3724680706' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 01 16:34:40 compute-0 podman[82732]: 2025-10-01 16:34:40.101477258 +0000 UTC m=+0.049360147 container create 6820c90dcecf0c4922d00609f0b8b5e7ae06fc65d153e35637222a365cd26ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-xrtzhv, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd2d2e6e56d321b6271bb1bcca9f66e5b3b7f07f1bd37e5b6dec403292e8dd18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd2d2e6e56d321b6271bb1bcca9f66e5b3b7f07f1bd37e5b6dec403292e8dd18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd2d2e6e56d321b6271bb1bcca9f66e5b3b7f07f1bd37e5b6dec403292e8dd18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd2d2e6e56d321b6271bb1bcca9f66e5b3b7f07f1bd37e5b6dec403292e8dd18/merged/var/lib/ceph/mgr/ceph-compute-0.xrtzhv supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct 01 16:34:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 16:34:40 compute-0 ceph-mon[74273]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:40 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3724680706' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 01 16:34:40 compute-0 podman[82732]: 2025-10-01 16:34:40.079313419 +0000 UTC m=+0.027196318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:34:40 compute-0 podman[82732]: 2025-10-01 16:34:40.176184124 +0000 UTC m=+0.124067033 container init 6820c90dcecf0c4922d00609f0b8b5e7ae06fc65d153e35637222a365cd26ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-xrtzhv, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 16:34:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3724680706' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 01 16:34:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct 01 16:34:40 compute-0 heuristic_matsumoto[82583]: set require_min_compat_client to mimic
Oct 01 16:34:40 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct 01 16:34:40 compute-0 podman[82732]: 2025-10-01 16:34:40.187760717 +0000 UTC m=+0.135643606 container start 6820c90dcecf0c4922d00609f0b8b5e7ae06fc65d153e35637222a365cd26ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-xrtzhv, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:34:40 compute-0 bash[82732]: 6820c90dcecf0c4922d00609f0b8b5e7ae06fc65d153e35637222a365cd26ada
Oct 01 16:34:40 compute-0 systemd[1]: Started Ceph mgr.compute-0.xrtzhv for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5.
Oct 01 16:34:40 compute-0 systemd[1]: libpod-e7b697623e008addd67677a527dfca04b192481a858d7de878633756eeb00b31.scope: Deactivated successfully.
Oct 01 16:34:40 compute-0 podman[82559]: 2025-10-01 16:34:40.208631953 +0000 UTC m=+1.241313596 container died e7b697623e008addd67677a527dfca04b192481a858d7de878633756eeb00b31 (image=quay.io/ceph/ceph:v18, name=heuristic_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:34:40 compute-0 ceph-mgr[82752]: set uid:gid to 167:167 (ceph:ceph)
Oct 01 16:34:40 compute-0 ceph-mgr[82752]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 01 16:34:40 compute-0 ceph-mgr[82752]: pidfile_write: ignore empty --pid-file
Oct 01 16:34:40 compute-0 sudo[82414]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f63c9a04ab410191f241aab9b050db9e2edef3418107d973019c4538dfc7e41-merged.mount: Deactivated successfully.
Oct 01 16:34:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:34:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:34:40 compute-0 podman[82559]: 2025-10-01 16:34:40.315994484 +0000 UTC m=+1.348676127 container remove e7b697623e008addd67677a527dfca04b192481a858d7de878633756eeb00b31 (image=quay.io/ceph/ceph:v18, name=heuristic_matsumoto, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct 01 16:34:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 01 16:34:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:40 compute-0 ceph-mgr[74571]: [progress INFO root] complete: finished ev 738ace3e-9368-485d-8a9f-33d9578c09b2 (Updating mgr deployment (+1 -> 2))
Oct 01 16:34:40 compute-0 ceph-mgr[74571]: [progress INFO root] Completed event 738ace3e-9368-485d-8a9f-33d9578c09b2 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Oct 01 16:34:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 01 16:34:40 compute-0 systemd[1]: libpod-conmon-e7b697623e008addd67677a527dfca04b192481a858d7de878633756eeb00b31.scope: Deactivated successfully.
Oct 01 16:34:40 compute-0 sudo[82530]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:40 compute-0 ceph-mgr[82752]: mgr[py] Loading python module 'alerts'
Oct 01 16:34:40 compute-0 sudo[82789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:40 compute-0 sudo[82789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:40 compute-0 sudo[82789]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:40 compute-0 sudo[82814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:34:40 compute-0 sudo[82814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:40 compute-0 sudo[82814]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:40 compute-0 sudo[82839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:40 compute-0 sudo[82839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:40 compute-0 sudo[82839]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:40 compute-0 sudo[82864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:34:40 compute-0 sudo[82864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:40 compute-0 sudo[82864]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:40 compute-0 sudo[82889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:40 compute-0 sudo[82889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:40 compute-0 sudo[82889]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:40 compute-0 sudo[82914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 16:34:40 compute-0 sudo[82914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:40 compute-0 ceph-mgr[82752]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 01 16:34:40 compute-0 ceph-mgr[82752]: mgr[py] Loading python module 'balancer'
Oct 01 16:34:40 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-xrtzhv[82747]: 2025-10-01T16:34:40.715+0000 7fb65c873140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 01 16:34:40 compute-0 sudo[82962]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unpggssjgcslgfvejqoksqtdylvmpkxs ; /usr/bin/python3'
Oct 01 16:34:40 compute-0 sudo[82962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:34:40 compute-0 python3[82964]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:34:40 compute-0 podman[82990]: 2025-10-01 16:34:40.972180128 +0000 UTC m=+0.043327644 container create d12f96c6d9fa898ca7142d0c52881028a38ec99e69d5ed6543bddf1652216c90 (image=quay.io/ceph/ceph:v18, name=wonderful_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 16:34:40 compute-0 ceph-mgr[82752]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 01 16:34:40 compute-0 ceph-mgr[82752]: mgr[py] Loading python module 'cephadm'
Oct 01 16:34:40 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-xrtzhv[82747]: 2025-10-01T16:34:40.973+0000 7fb65c873140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 01 16:34:41 compute-0 systemd[1]: Started libpod-conmon-d12f96c6d9fa898ca7142d0c52881028a38ec99e69d5ed6543bddf1652216c90.scope.
Oct 01 16:34:41 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:41 compute-0 podman[82990]: 2025-10-01 16:34:40.950990233 +0000 UTC m=+0.022137769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e6ce18d0e7f97000a8ed5ac83c479ddcb8a5b7da9fc1dd874dd585ac94a8c7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e6ce18d0e7f97000a8ed5ac83c479ddcb8a5b7da9fc1dd874dd585ac94a8c7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e6ce18d0e7f97000a8ed5ac83c479ddcb8a5b7da9fc1dd874dd585ac94a8c7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:41 compute-0 podman[82990]: 2025-10-01 16:34:41.071608658 +0000 UTC m=+0.142756194 container init d12f96c6d9fa898ca7142d0c52881028a38ec99e69d5ed6543bddf1652216c90 (image=quay.io/ceph/ceph:v18, name=wonderful_torvalds, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:34:41 compute-0 podman[82990]: 2025-10-01 16:34:41.083092608 +0000 UTC m=+0.154240124 container start d12f96c6d9fa898ca7142d0c52881028a38ec99e69d5ed6543bddf1652216c90 (image=quay.io/ceph/ceph:v18, name=wonderful_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:34:41 compute-0 podman[82990]: 2025-10-01 16:34:41.087861219 +0000 UTC m=+0.159008765 container attach d12f96c6d9fa898ca7142d0c52881028a38ec99e69d5ed6543bddf1652216c90 (image=quay.io/ceph/ceph:v18, name=wonderful_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 16:34:41 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3724680706' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 01 16:34:41 compute-0 ceph-mon[74273]: osdmap e3: 0 total, 0 up, 0 in
Oct 01 16:34:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:41 compute-0 podman[83055]: 2025-10-01 16:34:41.252081424 +0000 UTC m=+0.063778341 container exec bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 16:34:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:41 compute-0 ceph-mgr[74571]: [progress INFO root] Writing back 2 completed events
Oct 01 16:34:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 01 16:34:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:34:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:34:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:34:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:34:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:34:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:34:41 compute-0 podman[83055]: 2025-10-01 16:34:41.369242022 +0000 UTC m=+0.180938939 container exec_died bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 16:34:41 compute-0 sudo[82914]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:34:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:34:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:34:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:34:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:34:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:34:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:34:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:41 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev bcd27dda-ab35-4626-bcdc-9e03e71d20b3 does not exist
Oct 01 16:34:41 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 78bf9fef-1d25-4136-bf7a-9a0e8ccc4f82 does not exist
Oct 01 16:34:41 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 046509a9-cd34-4e9f-8d51-61712bf98624 does not exist
Oct 01 16:34:41 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:41 compute-0 sudo[83157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:41 compute-0 sudo[83161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:41 compute-0 sudo[83157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:41 compute-0 sudo[83161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:41 compute-0 sudo[83157]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:41 compute-0 sudo[83161]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:41 compute-0 sudo[83208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:34:41 compute-0 sudo[83207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:34:41 compute-0 sudo[83208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:41 compute-0 sudo[83207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:41 compute-0 sudo[83208]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:41 compute-0 sudo[83207]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Oct 01 16:34:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Oct 01 16:34:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Oct 01 16:34:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Oct 01 16:34:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:41 compute-0 ceph-mgr[74571]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct 01 16:34:41 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct 01 16:34:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct 01 16:34:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 01 16:34:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct 01 16:34:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 01 16:34:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:34:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:34:41 compute-0 ceph-mgr[74571]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct 01 16:34:41 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct 01 16:34:41 compute-0 sudo[83257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:41 compute-0 sudo[83257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:41 compute-0 sudo[83257]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:41 compute-0 sudo[83280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:41 compute-0 sudo[83280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:41 compute-0 sudo[83280]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:41 compute-0 sudo[83294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 01 16:34:41 compute-0 sudo[83294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:41 compute-0 sudo[83330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:34:41 compute-0 sudo[83330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:41 compute-0 sudo[83330]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:42 compute-0 sudo[83357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:42 compute-0 sudo[83357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:42 compute-0 sudo[83357]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:42 compute-0 sudo[83382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:34:42 compute-0 sudo[83382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:42 compute-0 sudo[83294]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 01 16:34:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 01 16:34:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 01 16:34:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 01 16:34:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mgr[74571]: [cephadm INFO root] Added host compute-0
Oct 01 16:34:42 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 01 16:34:42 compute-0 ceph-mgr[74571]: [cephadm INFO root] Saving service mon spec with placement compute-0
Oct 01 16:34:42 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Oct 01 16:34:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 01 16:34:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mgr[74571]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Oct 01 16:34:42 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Oct 01 16:34:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 01 16:34:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mgr[74571]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct 01 16:34:42 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct 01 16:34:42 compute-0 ceph-mgr[74571]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Oct 01 16:34:42 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Oct 01 16:34:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Oct 01 16:34:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 podman[83443]: 2025-10-01 16:34:42.316152044 +0000 UTC m=+0.043206111 container create c7a48168fcced17c57c1b18a75e776a0da76d5428a10d345ed1781d6e3c24018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_pike, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 16:34:42 compute-0 wonderful_torvalds[83020]: Added host 'compute-0' with addr '192.168.122.100'
Oct 01 16:34:42 compute-0 wonderful_torvalds[83020]: Scheduled mon update...
Oct 01 16:34:42 compute-0 wonderful_torvalds[83020]: Scheduled mgr update...
Oct 01 16:34:42 compute-0 wonderful_torvalds[83020]: Scheduled osd.default_drive_group update...
Oct 01 16:34:42 compute-0 podman[82990]: 2025-10-01 16:34:42.342980722 +0000 UTC m=+1.414128238 container died d12f96c6d9fa898ca7142d0c52881028a38ec99e69d5ed6543bddf1652216c90 (image=quay.io/ceph/ceph:v18, name=wonderful_torvalds, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 16:34:42 compute-0 ceph-mon[74273]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 systemd[1]: Started libpod-conmon-c7a48168fcced17c57c1b18a75e776a0da76d5428a10d345ed1781d6e3c24018.scope.
Oct 01 16:34:42 compute-0 systemd[1]: libpod-d12f96c6d9fa898ca7142d0c52881028a38ec99e69d5ed6543bddf1652216c90.scope: Deactivated successfully.
Oct 01 16:34:42 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0e6ce18d0e7f97000a8ed5ac83c479ddcb8a5b7da9fc1dd874dd585ac94a8c7-merged.mount: Deactivated successfully.
Oct 01 16:34:42 compute-0 podman[82990]: 2025-10-01 16:34:42.389741132 +0000 UTC m=+1.460888648 container remove d12f96c6d9fa898ca7142d0c52881028a38ec99e69d5ed6543bddf1652216c90 (image=quay.io/ceph/ceph:v18, name=wonderful_torvalds, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:34:42 compute-0 podman[83443]: 2025-10-01 16:34:42.298571171 +0000 UTC m=+0.025625258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:34:42 compute-0 podman[83443]: 2025-10-01 16:34:42.398794691 +0000 UTC m=+0.125848788 container init c7a48168fcced17c57c1b18a75e776a0da76d5428a10d345ed1781d6e3c24018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:34:42 compute-0 podman[83443]: 2025-10-01 16:34:42.405192322 +0000 UTC m=+0.132246389 container start c7a48168fcced17c57c1b18a75e776a0da76d5428a10d345ed1781d6e3c24018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_pike, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 01 16:34:42 compute-0 systemd[1]: libpod-conmon-d12f96c6d9fa898ca7142d0c52881028a38ec99e69d5ed6543bddf1652216c90.scope: Deactivated successfully.
Oct 01 16:34:42 compute-0 eager_pike[83467]: 167 167
Oct 01 16:34:42 compute-0 podman[83443]: 2025-10-01 16:34:42.411637675 +0000 UTC m=+0.138691772 container attach c7a48168fcced17c57c1b18a75e776a0da76d5428a10d345ed1781d6e3c24018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_pike, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 16:34:42 compute-0 systemd[1]: libpod-c7a48168fcced17c57c1b18a75e776a0da76d5428a10d345ed1781d6e3c24018.scope: Deactivated successfully.
Oct 01 16:34:42 compute-0 podman[83443]: 2025-10-01 16:34:42.413855701 +0000 UTC m=+0.140909768 container died c7a48168fcced17c57c1b18a75e776a0da76d5428a10d345ed1781d6e3c24018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:34:42 compute-0 sudo[82962]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-935ec5672a9585a23583b4e321145b5d23d1d84b502d7696fc40ffcbe423937b-merged.mount: Deactivated successfully.
Oct 01 16:34:42 compute-0 podman[83443]: 2025-10-01 16:34:42.462843577 +0000 UTC m=+0.189897654 container remove c7a48168fcced17c57c1b18a75e776a0da76d5428a10d345ed1781d6e3c24018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_pike, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 16:34:42 compute-0 systemd[1]: libpod-conmon-c7a48168fcced17c57c1b18a75e776a0da76d5428a10d345ed1781d6e3c24018.scope: Deactivated successfully.
Oct 01 16:34:42 compute-0 sudo[83382]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:34:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:34:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:42 compute-0 ceph-mgr[74571]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.pmbdpj (unknown last config time)...
Oct 01 16:34:42 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.pmbdpj (unknown last config time)...
Oct 01 16:34:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.pmbdpj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 01 16:34:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.pmbdpj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 01 16:34:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 01 16:34:42 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 01 16:34:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:34:42 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:34:42 compute-0 ceph-mgr[74571]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.pmbdpj on compute-0
Oct 01 16:34:42 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.pmbdpj on compute-0
Oct 01 16:34:42 compute-0 sudo[83500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:42 compute-0 sudo[83500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:42 compute-0 sudo[83500]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:42 compute-0 sudo[83525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:34:42 compute-0 sudo[83525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:42 compute-0 sudo[83525]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:42 compute-0 sudo[83573]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rczgcfrhzwjkhlsvuwczhyrbxdyzyqho ; /usr/bin/python3'
Oct 01 16:34:42 compute-0 sudo[83573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:34:42 compute-0 sudo[83574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:42 compute-0 sudo[83574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:42 compute-0 sudo[83574]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:42 compute-0 sudo[83601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:34:42 compute-0 sudo[83601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:42 compute-0 python3[83581]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:34:42 compute-0 podman[83627]: 2025-10-01 16:34:42.845439026 +0000 UTC m=+0.035700913 container create da22b5df5b8efc3b00165a60dd7ea7971e73fd801ddc1feaed0dc8b5bafe222c (image=quay.io/ceph/ceph:v18, name=happy_fermat, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:42 compute-0 systemd[1]: Started libpod-conmon-da22b5df5b8efc3b00165a60dd7ea7971e73fd801ddc1feaed0dc8b5bafe222c.scope.
Oct 01 16:34:42 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b5403e6f98ec894244db0283d2051f730f2958972af9ce977c9fb640d3a4953/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b5403e6f98ec894244db0283d2051f730f2958972af9ce977c9fb640d3a4953/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b5403e6f98ec894244db0283d2051f730f2958972af9ce977c9fb640d3a4953/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:42 compute-0 podman[83627]: 2025-10-01 16:34:42.828937209 +0000 UTC m=+0.019199116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:34:42 compute-0 podman[83627]: 2025-10-01 16:34:42.932134944 +0000 UTC m=+0.122396831 container init da22b5df5b8efc3b00165a60dd7ea7971e73fd801ddc1feaed0dc8b5bafe222c (image=quay.io/ceph/ceph:v18, name=happy_fermat, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 16:34:42 compute-0 podman[83627]: 2025-10-01 16:34:42.939520061 +0000 UTC m=+0.129781948 container start da22b5df5b8efc3b00165a60dd7ea7971e73fd801ddc1feaed0dc8b5bafe222c (image=quay.io/ceph/ceph:v18, name=happy_fermat, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:42 compute-0 podman[83627]: 2025-10-01 16:34:42.944914647 +0000 UTC m=+0.135176544 container attach da22b5df5b8efc3b00165a60dd7ea7971e73fd801ddc1feaed0dc8b5bafe222c (image=quay.io/ceph/ceph:v18, name=happy_fermat, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 16:34:42 compute-0 podman[83660]: 2025-10-01 16:34:42.989093522 +0000 UTC m=+0.036330058 container create d052009a62ba67463595b2a25612cc5617ad0ffe5496958415e039ee06fe4c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 16:34:43 compute-0 systemd[1]: Started libpod-conmon-d052009a62ba67463595b2a25612cc5617ad0ffe5496958415e039ee06fe4c64.scope.
Oct 01 16:34:43 compute-0 ceph-mgr[82752]: mgr[py] Loading python module 'crash'
Oct 01 16:34:43 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:43 compute-0 podman[83660]: 2025-10-01 16:34:43.054038651 +0000 UTC m=+0.101275207 container init d052009a62ba67463595b2a25612cc5617ad0ffe5496958415e039ee06fe4c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:34:43 compute-0 podman[83660]: 2025-10-01 16:34:43.059190091 +0000 UTC m=+0.106426617 container start d052009a62ba67463595b2a25612cc5617ad0ffe5496958415e039ee06fe4c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cori, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 16:34:43 compute-0 hardcore_cori[83676]: 167 167
Oct 01 16:34:43 compute-0 systemd[1]: libpod-d052009a62ba67463595b2a25612cc5617ad0ffe5496958415e039ee06fe4c64.scope: Deactivated successfully.
Oct 01 16:34:43 compute-0 conmon[83676]: conmon d052009a62ba67463595 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d052009a62ba67463595b2a25612cc5617ad0ffe5496958415e039ee06fe4c64.scope/container/memory.events
Oct 01 16:34:43 compute-0 podman[83660]: 2025-10-01 16:34:42.972022261 +0000 UTC m=+0.019258817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:34:43 compute-0 podman[83660]: 2025-10-01 16:34:43.062378542 +0000 UTC m=+0.109615108 container attach d052009a62ba67463595b2a25612cc5617ad0ffe5496958415e039ee06fe4c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:34:43 compute-0 podman[83660]: 2025-10-01 16:34:43.08130683 +0000 UTC m=+0.128543366 container died d052009a62ba67463595b2a25612cc5617ad0ffe5496958415e039ee06fe4c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cori, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:34:43 compute-0 podman[83660]: 2025-10-01 16:34:43.118073728 +0000 UTC m=+0.165310274 container remove d052009a62ba67463595b2a25612cc5617ad0ffe5496958415e039ee06fe4c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Oct 01 16:34:43 compute-0 systemd[1]: libpod-conmon-d052009a62ba67463595b2a25612cc5617ad0ffe5496958415e039ee06fe4c64.scope: Deactivated successfully.
Oct 01 16:34:43 compute-0 sudo[83601]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:34:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:34:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:43 compute-0 sudo[83695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:43 compute-0 sudo[83695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:43 compute-0 sudo[83695]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:43 compute-0 sudo[83720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:34:43 compute-0 sudo[83720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:43 compute-0 sudo[83720]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fb5eda08064b8d5e8e52e7465d315e5a3c6f0457f5ecfa78e2aa796905a140e-merged.mount: Deactivated successfully.
Oct 01 16:34:43 compute-0 ceph-mgr[82752]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 01 16:34:43 compute-0 ceph-mgr[82752]: mgr[py] Loading python module 'dashboard'
Oct 01 16:34:43 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-xrtzhv[82747]: 2025-10-01T16:34:43.334+0000 7fb65c873140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 01 16:34:43 compute-0 sudo[83764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:43 compute-0 sudo[83764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:43 compute-0 sudo[83764]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:43 compute-0 sudo[83789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 16:34:43 compute-0 sudo[83789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:43 compute-0 ceph-mon[74273]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct 01 16:34:43 compute-0 ceph-mon[74273]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 01 16:34:43 compute-0 ceph-mon[74273]: Added host compute-0
Oct 01 16:34:43 compute-0 ceph-mon[74273]: Saving service mon spec with placement compute-0
Oct 01 16:34:43 compute-0 ceph-mon[74273]: Saving service mgr spec with placement compute-0
Oct 01 16:34:43 compute-0 ceph-mon[74273]: Marking host: compute-0 for OSDSpec preview refresh.
Oct 01 16:34:43 compute-0 ceph-mon[74273]: Saving service osd.default_drive_group spec with placement compute-0
Oct 01 16:34:43 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:43 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:43 compute-0 ceph-mon[74273]: Reconfiguring mgr.compute-0.pmbdpj (unknown last config time)...
Oct 01 16:34:43 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.pmbdpj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 01 16:34:43 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 01 16:34:43 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:34:43 compute-0 ceph-mon[74273]: Reconfiguring daemon mgr.compute-0.pmbdpj on compute-0
Oct 01 16:34:43 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:43 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 01 16:34:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4202555826' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 01 16:34:43 compute-0 happy_fermat[83645]: 
Oct 01 16:34:43 compute-0 happy_fermat[83645]: {"fsid":"f44264e3-e26a-5bd3-9e84-b4ba651d9cf5","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":79,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-01T16:33:19.992352+0000","services":{}},"progress_events":{}}
Oct 01 16:34:43 compute-0 systemd[1]: libpod-da22b5df5b8efc3b00165a60dd7ea7971e73fd801ddc1feaed0dc8b5bafe222c.scope: Deactivated successfully.
Oct 01 16:34:43 compute-0 podman[83627]: 2025-10-01 16:34:43.586677687 +0000 UTC m=+0.776939574 container died da22b5df5b8efc3b00165a60dd7ea7971e73fd801ddc1feaed0dc8b5bafe222c (image=quay.io/ceph/ceph:v18, name=happy_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 16:34:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b5403e6f98ec894244db0283d2051f730f2958972af9ce977c9fb640d3a4953-merged.mount: Deactivated successfully.
Oct 01 16:34:43 compute-0 podman[83627]: 2025-10-01 16:34:43.635093169 +0000 UTC m=+0.825355056 container remove da22b5df5b8efc3b00165a60dd7ea7971e73fd801ddc1feaed0dc8b5bafe222c (image=quay.io/ceph/ceph:v18, name=happy_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 16:34:43 compute-0 systemd[1]: libpod-conmon-da22b5df5b8efc3b00165a60dd7ea7971e73fd801ddc1feaed0dc8b5bafe222c.scope: Deactivated successfully.
Oct 01 16:34:43 compute-0 sudo[83573]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:34:43 compute-0 podman[83898]: 2025-10-01 16:34:43.886602739 +0000 UTC m=+0.072177254 container exec bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:34:43 compute-0 podman[83898]: 2025-10-01 16:34:43.983825383 +0000 UTC m=+0.169399898 container exec_died bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:44 compute-0 sudo[83789]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:34:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:34:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:34:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:34:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:34:44 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:34:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:34:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:34:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:34:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:44 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev f0845485-ff8e-4f98-8d5c-980b86ac49d5 does not exist
Oct 01 16:34:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 01 16:34:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:44 compute-0 ceph-mgr[74571]: [progress INFO root] update: starting ev 82f7b73a-ee75-495c-b846-a0eab9e7d59c (Updating mgr deployment (-1 -> 1))
Oct 01 16:34:44 compute-0 ceph-mgr[74571]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.xrtzhv from compute-0 -- ports [8765]
Oct 01 16:34:44 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.xrtzhv from compute-0 -- ports [8765]
Oct 01 16:34:44 compute-0 sudo[83985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:44 compute-0 sudo[83985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:44 compute-0 sudo[83985]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:44 compute-0 sudo[84010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:34:44 compute-0 sudo[84010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:44 compute-0 sudo[84010]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:44 compute-0 sudo[84035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:44 compute-0 sudo[84035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:44 compute-0 sudo[84035]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:44 compute-0 sudo[84060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 rm-daemon --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --name mgr.compute-0.xrtzhv --force --tcp-ports 8765
Oct 01 16:34:44 compute-0 sudo[84060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:44 compute-0 ceph-mon[74273]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4202555826' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 01 16:34:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:34:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:34:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:44 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.xrtzhv for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5...
Oct 01 16:34:44 compute-0 ceph-mgr[82752]: mgr[py] Loading python module 'devicehealth'
Oct 01 16:34:45 compute-0 podman[84153]: 2025-10-01 16:34:45.019252961 +0000 UTC m=+0.054977219 container died 6820c90dcecf0c4922d00609f0b8b5e7ae06fc65d153e35637222a365cd26ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-xrtzhv, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 16:34:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd2d2e6e56d321b6271bb1bcca9f66e5b3b7f07f1bd37e5b6dec403292e8dd18-merged.mount: Deactivated successfully.
Oct 01 16:34:45 compute-0 podman[84153]: 2025-10-01 16:34:45.060714358 +0000 UTC m=+0.096438616 container remove 6820c90dcecf0c4922d00609f0b8b5e7ae06fc65d153e35637222a365cd26ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-xrtzhv, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:34:45 compute-0 bash[84153]: ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-xrtzhv
Oct 01 16:34:45 compute-0 systemd[1]: ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5@mgr.compute-0.xrtzhv.service: Main process exited, code=exited, status=143/n/a
Oct 01 16:34:45 compute-0 systemd[1]: ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5@mgr.compute-0.xrtzhv.service: Failed with result 'exit-code'.
Oct 01 16:34:45 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.xrtzhv for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5.
Oct 01 16:34:45 compute-0 systemd[1]: ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5@mgr.compute-0.xrtzhv.service: Consumed 5.645s CPU time.
Oct 01 16:34:45 compute-0 systemd[1]: Reloading.
Oct 01 16:34:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:45 compute-0 systemd-rc-local-generator[84239]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:34:45 compute-0 systemd-sysv-generator[84242]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:34:45 compute-0 sudo[84060]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:45 compute-0 ceph-mgr[74571]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.xrtzhv
Oct 01 16:34:45 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.xrtzhv
Oct 01 16:34:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.xrtzhv"} v 0) v1
Oct 01 16:34:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.xrtzhv"}]: dispatch
Oct 01 16:34:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.xrtzhv"}]': finished
Oct 01 16:34:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 01 16:34:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:45 compute-0 ceph-mgr[74571]: [progress INFO root] complete: finished ev 82f7b73a-ee75-495c-b846-a0eab9e7d59c (Updating mgr deployment (-1 -> 1))
Oct 01 16:34:45 compute-0 ceph-mgr[74571]: [progress INFO root] Completed event 82f7b73a-ee75-495c-b846-a0eab9e7d59c (Updating mgr deployment (-1 -> 1)) in 1 seconds
Oct 01 16:34:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 01 16:34:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:45 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev c1d93d71-cda5-4cb3-96b1-3b033abbd538 does not exist
Oct 01 16:34:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:34:45 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:34:45 compute-0 ceph-mon[74273]: Removing daemon mgr.compute-0.xrtzhv from compute-0 -- ports [8765]
Oct 01 16:34:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.xrtzhv"}]: dispatch
Oct 01 16:34:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.xrtzhv"}]': finished
Oct 01 16:34:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:34:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:34:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:34:45 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:34:45 compute-0 sudo[84249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:45 compute-0 sudo[84249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:45 compute-0 sudo[84249]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:45 compute-0 sudo[84274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:34:45 compute-0 sudo[84274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:45 compute-0 sudo[84274]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:45 compute-0 sudo[84299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:45 compute-0 sudo[84299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:45 compute-0 sudo[84299]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:45 compute-0 sudo[84324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:34:45 compute-0 sudo[84324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:46 compute-0 podman[84389]: 2025-10-01 16:34:46.195745299 +0000 UTC m=+0.043005647 container create d406fa73943a8d439498a207603ea7740a8e8259f48483a0cddce01ef4ac27a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:46 compute-0 systemd[1]: Started libpod-conmon-d406fa73943a8d439498a207603ea7740a8e8259f48483a0cddce01ef4ac27a9.scope.
Oct 01 16:34:46 compute-0 podman[84389]: 2025-10-01 16:34:46.179243752 +0000 UTC m=+0.026504120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:34:46 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:46 compute-0 podman[84389]: 2025-10-01 16:34:46.292665956 +0000 UTC m=+0.139926304 container init d406fa73943a8d439498a207603ea7740a8e8259f48483a0cddce01ef4ac27a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mendel, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:34:46 compute-0 podman[84389]: 2025-10-01 16:34:46.304255458 +0000 UTC m=+0.151515766 container start d406fa73943a8d439498a207603ea7740a8e8259f48483a0cddce01ef4ac27a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Oct 01 16:34:46 compute-0 podman[84389]: 2025-10-01 16:34:46.307745176 +0000 UTC m=+0.155005544 container attach d406fa73943a8d439498a207603ea7740a8e8259f48483a0cddce01ef4ac27a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mendel, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:34:46 compute-0 flamboyant_mendel[84405]: 167 167
Oct 01 16:34:46 compute-0 systemd[1]: libpod-d406fa73943a8d439498a207603ea7740a8e8259f48483a0cddce01ef4ac27a9.scope: Deactivated successfully.
Oct 01 16:34:46 compute-0 conmon[84405]: conmon d406fa73943a8d439498 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d406fa73943a8d439498a207603ea7740a8e8259f48483a0cddce01ef4ac27a9.scope/container/memory.events
Oct 01 16:34:46 compute-0 podman[84389]: 2025-10-01 16:34:46.313748908 +0000 UTC m=+0.161009216 container died d406fa73943a8d439498a207603ea7740a8e8259f48483a0cddce01ef4ac27a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 16:34:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-23301aa477751088764853d1b3fda5c2db2a49f471aa220a866172a00848f599-merged.mount: Deactivated successfully.
Oct 01 16:34:46 compute-0 podman[84389]: 2025-10-01 16:34:46.349545142 +0000 UTC m=+0.196805450 container remove d406fa73943a8d439498a207603ea7740a8e8259f48483a0cddce01ef4ac27a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mendel, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:34:46 compute-0 ceph-mgr[74571]: [progress INFO root] Writing back 3 completed events
Oct 01 16:34:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 01 16:34:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:46 compute-0 systemd[1]: libpod-conmon-d406fa73943a8d439498a207603ea7740a8e8259f48483a0cddce01ef4ac27a9.scope: Deactivated successfully.
Oct 01 16:34:46 compute-0 podman[84429]: 2025-10-01 16:34:46.504969425 +0000 UTC m=+0.038684167 container create 71cc6833247c3e1d22e1fe82ae9effd87a48dda5be090af9e56aed4b27ff6e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 16:34:46 compute-0 systemd[1]: Started libpod-conmon-71cc6833247c3e1d22e1fe82ae9effd87a48dda5be090af9e56aed4b27ff6e03.scope.
Oct 01 16:34:46 compute-0 ceph-mon[74273]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:46 compute-0 ceph-mon[74273]: Removing key for mgr.compute-0.xrtzhv
Oct 01 16:34:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:34:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:34:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:34:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:34:46 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965e8595d6426745f05d2bfbcb1e81c5659cf9d9709c1a26767966f519bc955c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965e8595d6426745f05d2bfbcb1e81c5659cf9d9709c1a26767966f519bc955c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:46 compute-0 podman[84429]: 2025-10-01 16:34:46.489160406 +0000 UTC m=+0.022875148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965e8595d6426745f05d2bfbcb1e81c5659cf9d9709c1a26767966f519bc955c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965e8595d6426745f05d2bfbcb1e81c5659cf9d9709c1a26767966f519bc955c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965e8595d6426745f05d2bfbcb1e81c5659cf9d9709c1a26767966f519bc955c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:34:46 compute-0 podman[84429]: 2025-10-01 16:34:46.634598927 +0000 UTC m=+0.168313679 container init 71cc6833247c3e1d22e1fe82ae9effd87a48dda5be090af9e56aed4b27ff6e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:34:46 compute-0 podman[84429]: 2025-10-01 16:34:46.641599544 +0000 UTC m=+0.175314276 container start 71cc6833247c3e1d22e1fe82ae9effd87a48dda5be090af9e56aed4b27ff6e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 16:34:46 compute-0 podman[84429]: 2025-10-01 16:34:46.650024657 +0000 UTC m=+0.183739409 container attach 71cc6833247c3e1d22e1fe82ae9effd87a48dda5be090af9e56aed4b27ff6e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 16:34:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:47 compute-0 objective_kalam[84445]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:34:47 compute-0 objective_kalam[84445]: --> relative data size: 1.0
Oct 01 16:34:47 compute-0 objective_kalam[84445]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 01 16:34:47 compute-0 objective_kalam[84445]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 5fa2557d-fd0d-408d-adf7-3e2a01798c5f
Oct 01 16:34:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f"} v 0) v1
Oct 01 16:34:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3617059668' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f"}]: dispatch
Oct 01 16:34:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct 01 16:34:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 16:34:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3617059668' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f"}]': finished
Oct 01 16:34:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct 01 16:34:48 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct 01 16:34:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 16:34:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:34:48 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 16:34:48 compute-0 objective_kalam[84445]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 01 16:34:48 compute-0 lvm[84507]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 01 16:34:48 compute-0 lvm[84507]: VG ceph_vg0 finished
Oct 01 16:34:48 compute-0 objective_kalam[84445]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct 01 16:34:48 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct 01 16:34:48 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 01 16:34:48 compute-0 objective_kalam[84445]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 01 16:34:48 compute-0 objective_kalam[84445]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct 01 16:34:48 compute-0 ceph-mon[74273]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:48 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3617059668' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f"}]: dispatch
Oct 01 16:34:48 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3617059668' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f"}]': finished
Oct 01 16:34:48 compute-0 ceph-mon[74273]: osdmap e4: 1 total, 0 up, 1 in
Oct 01 16:34:48 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:34:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:34:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 01 16:34:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2562532471' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 01 16:34:48 compute-0 objective_kalam[84445]:  stderr: got monmap epoch 1
Oct 01 16:34:48 compute-0 objective_kalam[84445]: --> Creating keyring file for osd.0
Oct 01 16:34:48 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct 01 16:34:48 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct 01 16:34:48 compute-0 objective_kalam[84445]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 5fa2557d-fd0d-408d-adf7-3e2a01798c5f --setuser ceph --setgroup ceph
Oct 01 16:34:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:49 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 01 16:34:49 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 01 16:34:49 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2562532471' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 01 16:34:50 compute-0 ceph-mon[74273]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:50 compute-0 ceph-mon[74273]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 01 16:34:50 compute-0 ceph-mon[74273]: Cluster is now healthy
Oct 01 16:34:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:51 compute-0 objective_kalam[84445]:  stderr: 2025-10-01T16:34:48.828+0000 7f19203c2740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 01 16:34:51 compute-0 objective_kalam[84445]:  stderr: 2025-10-01T16:34:48.828+0000 7f19203c2740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 01 16:34:51 compute-0 objective_kalam[84445]:  stderr: 2025-10-01T16:34:48.828+0000 7f19203c2740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 01 16:34:51 compute-0 objective_kalam[84445]:  stderr: 2025-10-01T16:34:48.829+0000 7f19203c2740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Oct 01 16:34:51 compute-0 objective_kalam[84445]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct 01 16:34:51 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 01 16:34:51 compute-0 objective_kalam[84445]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct 01 16:34:51 compute-0 objective_kalam[84445]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 01 16:34:51 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct 01 16:34:51 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 01 16:34:51 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 01 16:34:51 compute-0 objective_kalam[84445]: --> ceph-volume lvm activate successful for osd ID: 0
Oct 01 16:34:51 compute-0 objective_kalam[84445]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct 01 16:34:51 compute-0 objective_kalam[84445]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 01 16:34:51 compute-0 objective_kalam[84445]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new fb77ac1b-9869-45aa-84e9-22b10d405207
Oct 01 16:34:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207"} v 0) v1
Oct 01 16:34:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/691968538' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207"}]: dispatch
Oct 01 16:34:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct 01 16:34:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 16:34:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/691968538' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207"}]': finished
Oct 01 16:34:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct 01 16:34:52 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct 01 16:34:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 16:34:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:34:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 16:34:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:34:52 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 16:34:52 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 16:34:52 compute-0 lvm[85458]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 01 16:34:52 compute-0 lvm[85458]: VG ceph_vg1 finished
Oct 01 16:34:52 compute-0 objective_kalam[84445]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 01 16:34:52 compute-0 objective_kalam[84445]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct 01 16:34:52 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Oct 01 16:34:52 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 01 16:34:52 compute-0 objective_kalam[84445]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 01 16:34:52 compute-0 objective_kalam[84445]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct 01 16:34:52 compute-0 ceph-mon[74273]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/691968538' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207"}]: dispatch
Oct 01 16:34:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/691968538' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207"}]': finished
Oct 01 16:34:52 compute-0 ceph-mon[74273]: osdmap e5: 2 total, 0 up, 2 in
Oct 01 16:34:52 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:34:52 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:34:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 01 16:34:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2352910185' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 01 16:34:52 compute-0 objective_kalam[84445]:  stderr: got monmap epoch 1
Oct 01 16:34:52 compute-0 objective_kalam[84445]: --> Creating keyring file for osd.1
Oct 01 16:34:52 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct 01 16:34:52 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct 01 16:34:52 compute-0 objective_kalam[84445]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid fb77ac1b-9869-45aa-84e9-22b10d405207 --setuser ceph --setgroup ceph
Oct 01 16:34:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:53 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2352910185' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 01 16:34:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:34:54 compute-0 ceph-mon[74273]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:55 compute-0 objective_kalam[84445]:  stderr: 2025-10-01T16:34:52.948+0000 7f8f5e7dc740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 01 16:34:55 compute-0 objective_kalam[84445]:  stderr: 2025-10-01T16:34:52.948+0000 7f8f5e7dc740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 01 16:34:55 compute-0 objective_kalam[84445]:  stderr: 2025-10-01T16:34:52.948+0000 7f8f5e7dc740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 01 16:34:55 compute-0 objective_kalam[84445]:  stderr: 2025-10-01T16:34:52.949+0000 7f8f5e7dc740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct 01 16:34:55 compute-0 objective_kalam[84445]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Oct 01 16:34:55 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 01 16:34:55 compute-0 objective_kalam[84445]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct 01 16:34:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:55 compute-0 objective_kalam[84445]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 01 16:34:55 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct 01 16:34:55 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 01 16:34:55 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 01 16:34:55 compute-0 objective_kalam[84445]: --> ceph-volume lvm activate successful for osd ID: 1
Oct 01 16:34:55 compute-0 objective_kalam[84445]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Oct 01 16:34:55 compute-0 objective_kalam[84445]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 01 16:34:55 compute-0 objective_kalam[84445]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 565523b5-fa16-4aaa-b37b-8314e4edb10e
Oct 01 16:34:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e"} v 0) v1
Oct 01 16:34:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2389680861' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e"}]: dispatch
Oct 01 16:34:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct 01 16:34:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 16:34:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2389680861' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e"}]': finished
Oct 01 16:34:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Oct 01 16:34:55 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Oct 01 16:34:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 16:34:55 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 16:34:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:34:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 16:34:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:34:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 16:34:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:34:55 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 16:34:55 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 16:34:55 compute-0 lvm[86409]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 01 16:34:55 compute-0 lvm[86409]: VG ceph_vg2 finished
Oct 01 16:34:55 compute-0 objective_kalam[84445]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 01 16:34:55 compute-0 objective_kalam[84445]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Oct 01 16:34:55 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Oct 01 16:34:55 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 01 16:34:55 compute-0 objective_kalam[84445]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 01 16:34:55 compute-0 objective_kalam[84445]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Oct 01 16:34:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 01 16:34:56 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3892152050' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 01 16:34:56 compute-0 objective_kalam[84445]:  stderr: got monmap epoch 1
Oct 01 16:34:56 compute-0 objective_kalam[84445]: --> Creating keyring file for osd.2
Oct 01 16:34:56 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Oct 01 16:34:56 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Oct 01 16:34:56 compute-0 objective_kalam[84445]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 565523b5-fa16-4aaa-b37b-8314e4edb10e --setuser ceph --setgroup ceph
Oct 01 16:34:56 compute-0 ceph-mon[74273]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:56 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2389680861' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e"}]: dispatch
Oct 01 16:34:56 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2389680861' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e"}]': finished
Oct 01 16:34:56 compute-0 ceph-mon[74273]: osdmap e6: 3 total, 0 up, 3 in
Oct 01 16:34:56 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:34:56 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:34:56 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:34:56 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3892152050' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 01 16:34:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:58 compute-0 ceph-mon[74273]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:34:58 compute-0 objective_kalam[84445]:  stderr: 2025-10-01T16:34:56.502+0000 7f93077e2740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 01 16:34:58 compute-0 objective_kalam[84445]:  stderr: 2025-10-01T16:34:56.502+0000 7f93077e2740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 01 16:34:58 compute-0 objective_kalam[84445]:  stderr: 2025-10-01T16:34:56.502+0000 7f93077e2740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 01 16:34:58 compute-0 objective_kalam[84445]:  stderr: 2025-10-01T16:34:56.503+0000 7f93077e2740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Oct 01 16:34:58 compute-0 objective_kalam[84445]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Oct 01 16:34:58 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 01 16:34:58 compute-0 objective_kalam[84445]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct 01 16:34:59 compute-0 objective_kalam[84445]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 01 16:34:59 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct 01 16:34:59 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 01 16:34:59 compute-0 objective_kalam[84445]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 01 16:34:59 compute-0 objective_kalam[84445]: --> ceph-volume lvm activate successful for osd ID: 2
Oct 01 16:34:59 compute-0 objective_kalam[84445]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Oct 01 16:34:59 compute-0 systemd[1]: libpod-71cc6833247c3e1d22e1fe82ae9effd87a48dda5be090af9e56aed4b27ff6e03.scope: Deactivated successfully.
Oct 01 16:34:59 compute-0 systemd[1]: libpod-71cc6833247c3e1d22e1fe82ae9effd87a48dda5be090af9e56aed4b27ff6e03.scope: Consumed 5.872s CPU time.
Oct 01 16:34:59 compute-0 podman[87330]: 2025-10-01 16:34:59.111620262 +0000 UTC m=+0.024402367 container died 71cc6833247c3e1d22e1fe82ae9effd87a48dda5be090af9e56aed4b27ff6e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 16:34:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-965e8595d6426745f05d2bfbcb1e81c5659cf9d9709c1a26767966f519bc955c-merged.mount: Deactivated successfully.
Oct 01 16:34:59 compute-0 podman[87330]: 2025-10-01 16:34:59.19803779 +0000 UTC m=+0.110819865 container remove 71cc6833247c3e1d22e1fe82ae9effd87a48dda5be090af9e56aed4b27ff6e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 16:34:59 compute-0 systemd[1]: libpod-conmon-71cc6833247c3e1d22e1fe82ae9effd87a48dda5be090af9e56aed4b27ff6e03.scope: Deactivated successfully.
Oct 01 16:34:59 compute-0 sudo[84324]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:34:59 compute-0 sudo[87345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:59 compute-0 sudo[87345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:59 compute-0 sudo[87345]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:59 compute-0 sudo[87370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:34:59 compute-0 sudo[87370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:59 compute-0 sudo[87370]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:59 compute-0 sudo[87395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:34:59 compute-0 sudo[87395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:59 compute-0 sudo[87395]: pam_unix(sudo:session): session closed for user root
Oct 01 16:34:59 compute-0 sudo[87420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:34:59 compute-0 sudo[87420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:34:59 compute-0 podman[87482]: 2025-10-01 16:34:59.816074312 +0000 UTC m=+0.045999298 container create 6d5811d3926de1ffb2009c28d83412567d7102afc9a5690ba46208df9fa881a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rubin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 16:34:59 compute-0 systemd[1]: Started libpod-conmon-6d5811d3926de1ffb2009c28d83412567d7102afc9a5690ba46208df9fa881a1.scope.
Oct 01 16:34:59 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:34:59 compute-0 podman[87482]: 2025-10-01 16:34:59.794238487 +0000 UTC m=+0.024163453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:34:59 compute-0 podman[87482]: 2025-10-01 16:34:59.898634863 +0000 UTC m=+0.128559889 container init 6d5811d3926de1ffb2009c28d83412567d7102afc9a5690ba46208df9fa881a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rubin, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:34:59 compute-0 podman[87482]: 2025-10-01 16:34:59.905635791 +0000 UTC m=+0.135560767 container start 6d5811d3926de1ffb2009c28d83412567d7102afc9a5690ba46208df9fa881a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 16:34:59 compute-0 podman[87482]: 2025-10-01 16:34:59.909746731 +0000 UTC m=+0.139671707 container attach 6d5811d3926de1ffb2009c28d83412567d7102afc9a5690ba46208df9fa881a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rubin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 16:34:59 compute-0 funny_rubin[87500]: 167 167
Oct 01 16:34:59 compute-0 systemd[1]: libpod-6d5811d3926de1ffb2009c28d83412567d7102afc9a5690ba46208df9fa881a1.scope: Deactivated successfully.
Oct 01 16:34:59 compute-0 podman[87482]: 2025-10-01 16:34:59.91539472 +0000 UTC m=+0.145319746 container died 6d5811d3926de1ffb2009c28d83412567d7102afc9a5690ba46208df9fa881a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rubin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:34:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1f05e55d882d184c41c9c191016db139e66c2303d92526e428bb38796a8e06a-merged.mount: Deactivated successfully.
Oct 01 16:34:59 compute-0 podman[87482]: 2025-10-01 16:34:59.968543349 +0000 UTC m=+0.198468305 container remove 6d5811d3926de1ffb2009c28d83412567d7102afc9a5690ba46208df9fa881a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rubin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:34:59 compute-0 systemd[1]: libpod-conmon-6d5811d3926de1ffb2009c28d83412567d7102afc9a5690ba46208df9fa881a1.scope: Deactivated successfully.
Oct 01 16:35:00 compute-0 podman[87525]: 2025-10-01 16:35:00.191700709 +0000 UTC m=+0.065296249 container create 5697c93606f063bcbfacde318642bd48018737e92a8ef69c076a4e22e376e4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shannon, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:35:00 compute-0 systemd[1]: Started libpod-conmon-5697c93606f063bcbfacde318642bd48018737e92a8ef69c076a4e22e376e4c3.scope.
Oct 01 16:35:00 compute-0 podman[87525]: 2025-10-01 16:35:00.167789791 +0000 UTC m=+0.041385341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:00 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9806b09295fa42e9baa226490e6cc98d424d2e1c328bb12fc94b579e1a5dc2aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9806b09295fa42e9baa226490e6cc98d424d2e1c328bb12fc94b579e1a5dc2aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9806b09295fa42e9baa226490e6cc98d424d2e1c328bb12fc94b579e1a5dc2aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9806b09295fa42e9baa226490e6cc98d424d2e1c328bb12fc94b579e1a5dc2aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:00 compute-0 podman[87525]: 2025-10-01 16:35:00.292223301 +0000 UTC m=+0.165818851 container init 5697c93606f063bcbfacde318642bd48018737e92a8ef69c076a4e22e376e4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 01 16:35:00 compute-0 podman[87525]: 2025-10-01 16:35:00.307733448 +0000 UTC m=+0.181328948 container start 5697c93606f063bcbfacde318642bd48018737e92a8ef69c076a4e22e376e4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shannon, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 16:35:00 compute-0 podman[87525]: 2025-10-01 16:35:00.311582675 +0000 UTC m=+0.185178235 container attach 5697c93606f063bcbfacde318642bd48018737e92a8ef69c076a4e22e376e4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 16:35:00 compute-0 ceph-mon[74273]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:35:01 compute-0 interesting_shannon[87541]: {
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:     "0": [
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:         {
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "devices": [
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "/dev/loop3"
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             ],
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "lv_name": "ceph_lv0",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "lv_size": "21470642176",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "name": "ceph_lv0",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "tags": {
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.cluster_name": "ceph",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.crush_device_class": "",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.encrypted": "0",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.osd_id": "0",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.type": "block",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.vdo": "0"
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             },
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "type": "block",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "vg_name": "ceph_vg0"
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:         }
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:     ],
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:     "1": [
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:         {
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "devices": [
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "/dev/loop4"
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             ],
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "lv_name": "ceph_lv1",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "lv_size": "21470642176",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "name": "ceph_lv1",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "tags": {
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.cluster_name": "ceph",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.crush_device_class": "",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.encrypted": "0",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.osd_id": "1",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.type": "block",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.vdo": "0"
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             },
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "type": "block",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "vg_name": "ceph_vg1"
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:         }
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:     ],
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:     "2": [
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:         {
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "devices": [
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "/dev/loop5"
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             ],
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "lv_name": "ceph_lv2",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "lv_size": "21470642176",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "name": "ceph_lv2",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "tags": {
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.cluster_name": "ceph",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.crush_device_class": "",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.encrypted": "0",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.osd_id": "2",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.type": "block",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:                 "ceph.vdo": "0"
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             },
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "type": "block",
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:             "vg_name": "ceph_vg2"
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:         }
Oct 01 16:35:01 compute-0 interesting_shannon[87541]:     ]
Oct 01 16:35:01 compute-0 interesting_shannon[87541]: }
Oct 01 16:35:01 compute-0 systemd[1]: libpod-5697c93606f063bcbfacde318642bd48018737e92a8ef69c076a4e22e376e4c3.scope: Deactivated successfully.
Oct 01 16:35:01 compute-0 podman[87525]: 2025-10-01 16:35:01.077987804 +0000 UTC m=+0.951583314 container died 5697c93606f063bcbfacde318642bd48018737e92a8ef69c076a4e22e376e4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 16:35:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-9806b09295fa42e9baa226490e6cc98d424d2e1c328bb12fc94b579e1a5dc2aa-merged.mount: Deactivated successfully.
Oct 01 16:35:01 compute-0 podman[87525]: 2025-10-01 16:35:01.233738517 +0000 UTC m=+1.107334097 container remove 5697c93606f063bcbfacde318642bd48018737e92a8ef69c076a4e22e376e4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:01 compute-0 systemd[1]: libpod-conmon-5697c93606f063bcbfacde318642bd48018737e92a8ef69c076a4e22e376e4c3.scope: Deactivated successfully.
Oct 01 16:35:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:35:01 compute-0 sudo[87420]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Oct 01 16:35:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 01 16:35:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:35:01 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:01 compute-0 ceph-mgr[74571]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Oct 01 16:35:01 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Oct 01 16:35:01 compute-0 sudo[87562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:01 compute-0 sudo[87562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:01 compute-0 sudo[87562]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:01 compute-0 sudo[87587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:01 compute-0 sudo[87587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:01 compute-0 sudo[87587]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:01 compute-0 sudo[87612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:01 compute-0 sudo[87612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:01 compute-0 sudo[87612]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:01 compute-0 sudo[87637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:35:01 compute-0 sudo[87637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 01 16:35:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:01 compute-0 podman[87700]: 2025-10-01 16:35:01.945899051 +0000 UTC m=+0.045506989 container create 3043efa0d5afe83bcc03d0f112237d733fc46cdad84ac2f050f992baaead9c96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wescoff, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:01 compute-0 systemd[1]: Started libpod-conmon-3043efa0d5afe83bcc03d0f112237d733fc46cdad84ac2f050f992baaead9c96.scope.
Oct 01 16:35:02 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:02 compute-0 podman[87700]: 2025-10-01 16:35:02.015790518 +0000 UTC m=+0.115398476 container init 3043efa0d5afe83bcc03d0f112237d733fc46cdad84ac2f050f992baaead9c96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wescoff, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:02 compute-0 podman[87700]: 2025-10-01 16:35:01.924840195 +0000 UTC m=+0.024448173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:02 compute-0 podman[87700]: 2025-10-01 16:35:02.023594662 +0000 UTC m=+0.123202590 container start 3043efa0d5afe83bcc03d0f112237d733fc46cdad84ac2f050f992baaead9c96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wescoff, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 16:35:02 compute-0 podman[87700]: 2025-10-01 16:35:02.02638238 +0000 UTC m=+0.125990328 container attach 3043efa0d5afe83bcc03d0f112237d733fc46cdad84ac2f050f992baaead9c96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 16:35:02 compute-0 thirsty_wescoff[87716]: 167 167
Oct 01 16:35:02 compute-0 systemd[1]: libpod-3043efa0d5afe83bcc03d0f112237d733fc46cdad84ac2f050f992baaead9c96.scope: Deactivated successfully.
Oct 01 16:35:02 compute-0 conmon[87716]: conmon 3043efa0d5afe83bcc03 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3043efa0d5afe83bcc03d0f112237d733fc46cdad84ac2f050f992baaead9c96.scope/container/memory.events
Oct 01 16:35:02 compute-0 podman[87700]: 2025-10-01 16:35:02.028902336 +0000 UTC m=+0.128510264 container died 3043efa0d5afe83bcc03d0f112237d733fc46cdad84ac2f050f992baaead9c96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wescoff, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b4a9e13e1b838783b49d8c0045c8375723f7f91784d7c6f009e73c22b8d0d60-merged.mount: Deactivated successfully.
Oct 01 16:35:02 compute-0 podman[87700]: 2025-10-01 16:35:02.060704529 +0000 UTC m=+0.160312457 container remove 3043efa0d5afe83bcc03d0f112237d733fc46cdad84ac2f050f992baaead9c96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:35:02 compute-0 systemd[1]: libpod-conmon-3043efa0d5afe83bcc03d0f112237d733fc46cdad84ac2f050f992baaead9c96.scope: Deactivated successfully.
Oct 01 16:35:02 compute-0 podman[87747]: 2025-10-01 16:35:02.285261493 +0000 UTC m=+0.036760965 container create aa3990c99fcbd01257d942d3d929394807b88ad2fccb287d9e2cb9ef1437ecd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate-test, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:02 compute-0 systemd[1]: Started libpod-conmon-aa3990c99fcbd01257d942d3d929394807b88ad2fccb287d9e2cb9ef1437ecd3.scope.
Oct 01 16:35:02 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7e4ba7d9cc733b3f28bd3524070d188d9a19d2661788272d98d7e42eb4cf84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7e4ba7d9cc733b3f28bd3524070d188d9a19d2661788272d98d7e42eb4cf84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7e4ba7d9cc733b3f28bd3524070d188d9a19d2661788272d98d7e42eb4cf84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7e4ba7d9cc733b3f28bd3524070d188d9a19d2661788272d98d7e42eb4cf84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7e4ba7d9cc733b3f28bd3524070d188d9a19d2661788272d98d7e42eb4cf84/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:02 compute-0 podman[87747]: 2025-10-01 16:35:02.267774252 +0000 UTC m=+0.019273734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:02 compute-0 podman[87747]: 2025-10-01 16:35:02.37187769 +0000 UTC m=+0.123377172 container init aa3990c99fcbd01257d942d3d929394807b88ad2fccb287d9e2cb9ef1437ecd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:02 compute-0 podman[87747]: 2025-10-01 16:35:02.380747942 +0000 UTC m=+0.132247404 container start aa3990c99fcbd01257d942d3d929394807b88ad2fccb287d9e2cb9ef1437ecd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:02 compute-0 podman[87747]: 2025-10-01 16:35:02.38353257 +0000 UTC m=+0.135032052 container attach aa3990c99fcbd01257d942d3d929394807b88ad2fccb287d9e2cb9ef1437ecd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate-test, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:02 compute-0 ceph-mon[74273]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:35:02 compute-0 ceph-mon[74273]: Deploying daemon osd.0 on compute-0
Oct 01 16:35:03 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate-test[87763]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 01 16:35:03 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate-test[87763]:                             [--no-systemd] [--no-tmpfs]
Oct 01 16:35:03 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate-test[87763]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 01 16:35:03 compute-0 systemd[1]: libpod-aa3990c99fcbd01257d942d3d929394807b88ad2fccb287d9e2cb9ef1437ecd3.scope: Deactivated successfully.
Oct 01 16:35:03 compute-0 podman[87747]: 2025-10-01 16:35:03.020752585 +0000 UTC m=+0.772252087 container died aa3990c99fcbd01257d942d3d929394807b88ad2fccb287d9e2cb9ef1437ecd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate-test, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 16:35:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca7e4ba7d9cc733b3f28bd3524070d188d9a19d2661788272d98d7e42eb4cf84-merged.mount: Deactivated successfully.
Oct 01 16:35:03 compute-0 podman[87747]: 2025-10-01 16:35:03.076017172 +0000 UTC m=+0.827516644 container remove aa3990c99fcbd01257d942d3d929394807b88ad2fccb287d9e2cb9ef1437ecd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 16:35:03 compute-0 systemd[1]: libpod-conmon-aa3990c99fcbd01257d942d3d929394807b88ad2fccb287d9e2cb9ef1437ecd3.scope: Deactivated successfully.
Oct 01 16:35:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:35:03 compute-0 systemd[1]: Reloading.
Oct 01 16:35:03 compute-0 systemd-rc-local-generator[87828]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:35:03 compute-0 systemd-sysv-generator[87831]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:35:03 compute-0 systemd[1]: Reloading.
Oct 01 16:35:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:35:03 compute-0 systemd-rc-local-generator[87870]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:35:03 compute-0 systemd-sysv-generator[87873]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:35:03 compute-0 systemd[1]: Starting Ceph osd.0 for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5...
Oct 01 16:35:04 compute-0 podman[87927]: 2025-10-01 16:35:04.144039911 +0000 UTC m=+0.040253979 container create 7ecbeff4161d1875f09d6ca68d3f1732dec4ccacd8fd29dac35de60d9fbc802a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 01 16:35:04 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d7ff89a1088a9f1088b59916d4e2be7963dd7c83d3880ec791fedb64f774afa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d7ff89a1088a9f1088b59916d4e2be7963dd7c83d3880ec791fedb64f774afa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d7ff89a1088a9f1088b59916d4e2be7963dd7c83d3880ec791fedb64f774afa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d7ff89a1088a9f1088b59916d4e2be7963dd7c83d3880ec791fedb64f774afa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d7ff89a1088a9f1088b59916d4e2be7963dd7c83d3880ec791fedb64f774afa/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:04 compute-0 podman[87927]: 2025-10-01 16:35:04.208430874 +0000 UTC m=+0.104644952 container init 7ecbeff4161d1875f09d6ca68d3f1732dec4ccacd8fd29dac35de60d9fbc802a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 16:35:04 compute-0 podman[87927]: 2025-10-01 16:35:04.215135152 +0000 UTC m=+0.111349220 container start 7ecbeff4161d1875f09d6ca68d3f1732dec4ccacd8fd29dac35de60d9fbc802a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:04 compute-0 podman[87927]: 2025-10-01 16:35:04.218387822 +0000 UTC m=+0.114601910 container attach 7ecbeff4161d1875f09d6ca68d3f1732dec4ccacd8fd29dac35de60d9fbc802a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:04 compute-0 podman[87927]: 2025-10-01 16:35:04.127306412 +0000 UTC m=+0.023520480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:04 compute-0 ceph-mon[74273]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:35:05 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate[87943]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 01 16:35:05 compute-0 bash[87927]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 01 16:35:05 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate[87943]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 01 16:35:05 compute-0 bash[87927]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 01 16:35:05 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate[87943]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 01 16:35:05 compute-0 bash[87927]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 01 16:35:05 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate[87943]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 01 16:35:05 compute-0 bash[87927]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 01 16:35:05 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate[87943]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 01 16:35:05 compute-0 bash[87927]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 01 16:35:05 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate[87943]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 01 16:35:05 compute-0 bash[87927]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 01 16:35:05 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate[87943]: --> ceph-volume raw activate successful for osd ID: 0
Oct 01 16:35:05 compute-0 bash[87927]: --> ceph-volume raw activate successful for osd ID: 0
Oct 01 16:35:05 compute-0 systemd[1]: libpod-7ecbeff4161d1875f09d6ca68d3f1732dec4ccacd8fd29dac35de60d9fbc802a.scope: Deactivated successfully.
Oct 01 16:35:05 compute-0 systemd[1]: libpod-7ecbeff4161d1875f09d6ca68d3f1732dec4ccacd8fd29dac35de60d9fbc802a.scope: Consumed 1.045s CPU time.
Oct 01 16:35:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:35:05 compute-0 podman[88058]: 2025-10-01 16:35:05.283587036 +0000 UTC m=+0.026066773 container died 7ecbeff4161d1875f09d6ca68d3f1732dec4ccacd8fd29dac35de60d9fbc802a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d7ff89a1088a9f1088b59916d4e2be7963dd7c83d3880ec791fedb64f774afa-merged.mount: Deactivated successfully.
Oct 01 16:35:05 compute-0 podman[88058]: 2025-10-01 16:35:05.367454911 +0000 UTC m=+0.109934638 container remove 7ecbeff4161d1875f09d6ca68d3f1732dec4ccacd8fd29dac35de60d9fbc802a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0-activate, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:05 compute-0 podman[88121]: 2025-10-01 16:35:05.641429104 +0000 UTC m=+0.056468012 container create 072e3be7e651cceb0c51d0afbe31fc6fa877a178d9f381ac509755699c284d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 16:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca7e01b37b0dffdd8f6602b5592cec23a077ca76bb5a4a1cbd465d324a8513f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca7e01b37b0dffdd8f6602b5592cec23a077ca76bb5a4a1cbd465d324a8513f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca7e01b37b0dffdd8f6602b5592cec23a077ca76bb5a4a1cbd465d324a8513f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca7e01b37b0dffdd8f6602b5592cec23a077ca76bb5a4a1cbd465d324a8513f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca7e01b37b0dffdd8f6602b5592cec23a077ca76bb5a4a1cbd465d324a8513f/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:05 compute-0 podman[88121]: 2025-10-01 16:35:05.612219171 +0000 UTC m=+0.027258159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:05 compute-0 podman[88121]: 2025-10-01 16:35:05.717250871 +0000 UTC m=+0.132289769 container init 072e3be7e651cceb0c51d0afbe31fc6fa877a178d9f381ac509755699c284d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct 01 16:35:05 compute-0 podman[88121]: 2025-10-01 16:35:05.726825159 +0000 UTC m=+0.141864047 container start 072e3be7e651cceb0c51d0afbe31fc6fa877a178d9f381ac509755699c284d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 16:35:05 compute-0 bash[88121]: 072e3be7e651cceb0c51d0afbe31fc6fa877a178d9f381ac509755699c284d62
Oct 01 16:35:05 compute-0 systemd[1]: Started Ceph osd.0 for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5.
Oct 01 16:35:05 compute-0 ceph-osd[88140]: set uid:gid to 167:167 (ceph:ceph)
Oct 01 16:35:05 compute-0 ceph-osd[88140]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 01 16:35:05 compute-0 ceph-osd[88140]: pidfile_write: ignore empty --pid-file
Oct 01 16:35:05 compute-0 ceph-osd[88140]: bdev(0x559b457c7800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 01 16:35:05 compute-0 ceph-osd[88140]: bdev(0x559b457c7800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 01 16:35:05 compute-0 ceph-osd[88140]: bdev(0x559b457c7800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:05 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 16:35:05 compute-0 ceph-osd[88140]: bdev(0x559b46609800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 01 16:35:05 compute-0 ceph-osd[88140]: bdev(0x559b46609800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 01 16:35:05 compute-0 ceph-osd[88140]: bdev(0x559b46609800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:05 compute-0 ceph-osd[88140]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 01 16:35:05 compute-0 ceph-osd[88140]: bdev(0x559b46609800 /var/lib/ceph/osd/ceph-0/block) close
Oct 01 16:35:05 compute-0 sudo[87637]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:35:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:35:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Oct 01 16:35:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 01 16:35:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:35:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:05 compute-0 ceph-mgr[74571]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Oct 01 16:35:05 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Oct 01 16:35:05 compute-0 sudo[88153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:05 compute-0 sudo[88153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:05 compute-0 sudo[88153]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:05 compute-0 sudo[88178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:05 compute-0 sudo[88178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:05 compute-0 sudo[88178]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:05 compute-0 sudo[88203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:05 compute-0 sudo[88203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:05 compute-0 sudo[88203]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:06 compute-0 sudo[88228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:35:06 compute-0 sudo[88228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bdev(0x559b457c7800 /var/lib/ceph/osd/ceph-0/block) close
Oct 01 16:35:06 compute-0 podman[88295]: 2025-10-01 16:35:06.288071564 +0000 UTC m=+0.032207521 container create bedfbf384b19fbc70df76844f4cd0ba7b514df14afadbbf95282a02119403f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 01 16:35:06 compute-0 ceph-osd[88140]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct 01 16:35:06 compute-0 ceph-osd[88140]: load: jerasure load: lrc 
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bdev(0x559b45990c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bdev(0x559b45990c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bdev(0x559b45990c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bdev(0x559b45990c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 01 16:35:06 compute-0 systemd[1]: Started libpod-conmon-bedfbf384b19fbc70df76844f4cd0ba7b514df14afadbbf95282a02119403f8d.scope.
Oct 01 16:35:06 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:06 compute-0 podman[88295]: 2025-10-01 16:35:06.367068098 +0000 UTC m=+0.111204075 container init bedfbf384b19fbc70df76844f4cd0ba7b514df14afadbbf95282a02119403f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_satoshi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 16:35:06 compute-0 podman[88295]: 2025-10-01 16:35:06.274851617 +0000 UTC m=+0.018987594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:06 compute-0 podman[88295]: 2025-10-01 16:35:06.373600984 +0000 UTC m=+0.117736941 container start bedfbf384b19fbc70df76844f4cd0ba7b514df14afadbbf95282a02119403f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_satoshi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 01 16:35:06 compute-0 podman[88295]: 2025-10-01 16:35:06.376980791 +0000 UTC m=+0.121116768 container attach bedfbf384b19fbc70df76844f4cd0ba7b514df14afadbbf95282a02119403f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_satoshi, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:06 compute-0 serene_satoshi[88316]: 167 167
Oct 01 16:35:06 compute-0 systemd[1]: libpod-bedfbf384b19fbc70df76844f4cd0ba7b514df14afadbbf95282a02119403f8d.scope: Deactivated successfully.
Oct 01 16:35:06 compute-0 podman[88295]: 2025-10-01 16:35:06.377872917 +0000 UTC m=+0.122008874 container died bedfbf384b19fbc70df76844f4cd0ba7b514df14afadbbf95282a02119403f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f25601632158841bc8d85259635d0f0871b92dd0914be84ec3481ec3efa5b15-merged.mount: Deactivated successfully.
Oct 01 16:35:06 compute-0 podman[88295]: 2025-10-01 16:35:06.407530074 +0000 UTC m=+0.151666031 container remove bedfbf384b19fbc70df76844f4cd0ba7b514df14afadbbf95282a02119403f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:06 compute-0 systemd[1]: libpod-conmon-bedfbf384b19fbc70df76844f4cd0ba7b514df14afadbbf95282a02119403f8d.scope: Deactivated successfully.
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bdev(0x559b45990c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bdev(0x559b45990c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bdev(0x559b45990c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bdev(0x559b45990c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 01 16:35:06 compute-0 podman[88352]: 2025-10-01 16:35:06.626633468 +0000 UTC m=+0.033093279 container create f26a14b69d03392a32e12db1a00b95c170aeb3e41118829e50eb550e67d8cfe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate-test, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:06 compute-0 systemd[1]: Started libpod-conmon-f26a14b69d03392a32e12db1a00b95c170aeb3e41118829e50eb550e67d8cfe3.scope.
Oct 01 16:35:06 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d459fb04e1ffd91bf55cc2af9d0e2f1bc809d8ee3656d3088236130e0ecdd59b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d459fb04e1ffd91bf55cc2af9d0e2f1bc809d8ee3656d3088236130e0ecdd59b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d459fb04e1ffd91bf55cc2af9d0e2f1bc809d8ee3656d3088236130e0ecdd59b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d459fb04e1ffd91bf55cc2af9d0e2f1bc809d8ee3656d3088236130e0ecdd59b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d459fb04e1ffd91bf55cc2af9d0e2f1bc809d8ee3656d3088236130e0ecdd59b/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:06 compute-0 podman[88352]: 2025-10-01 16:35:06.710025902 +0000 UTC m=+0.116485733 container init f26a14b69d03392a32e12db1a00b95c170aeb3e41118829e50eb550e67d8cfe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:06 compute-0 podman[88352]: 2025-10-01 16:35:06.614040805 +0000 UTC m=+0.020500646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:06 compute-0 podman[88352]: 2025-10-01 16:35:06.716394536 +0000 UTC m=+0.122854337 container start f26a14b69d03392a32e12db1a00b95c170aeb3e41118829e50eb550e67d8cfe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate-test, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 01 16:35:06 compute-0 podman[88352]: 2025-10-01 16:35:06.719092913 +0000 UTC m=+0.125552744 container attach f26a14b69d03392a32e12db1a00b95c170aeb3e41118829e50eb550e67d8cfe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:06 compute-0 ceph-mon[74273]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:35:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 01 16:35:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:06 compute-0 ceph-mon[74273]: Deploying daemon osd.1 on compute-0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 01 16:35:06 compute-0 ceph-osd[88140]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bdev(0x559b45990c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bdev(0x559b45990c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bdev(0x559b45990c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bdev(0x559b45991400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bdev(0x559b45991400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bdev(0x559b45991400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bluefs mount
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bluefs mount shared_bdev_used = 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: RocksDB version: 7.9.2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Git sha 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: DB SUMMARY
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: DB Session ID:  D5RWPS9N2UNZ63124CKH
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: CURRENT file:  CURRENT
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: IDENTITY file:  IDENTITY
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                         Options.error_if_exists: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                       Options.create_if_missing: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                         Options.paranoid_checks: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                                     Options.env: 0x559b4665bd50
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                                Options.info_log: 0x559b45852980
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.max_file_opening_threads: 16
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                              Options.statistics: (nil)
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                               Options.use_fsync: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                       Options.max_log_file_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                         Options.allow_fallocate: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.use_direct_reads: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.create_missing_column_families: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                              Options.db_log_dir: 
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                                 Options.wal_dir: db.wal
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.advise_random_on_open: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.write_buffer_manager: 0x559b4676c460
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                            Options.rate_limiter: (nil)
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.unordered_write: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                               Options.row_cache: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                              Options.wal_filter: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.allow_ingest_behind: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.two_write_queues: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.manual_wal_flush: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.wal_compression: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.atomic_flush: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                 Options.log_readahead_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.allow_data_in_errors: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.db_host_id: __hostname__
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.max_background_jobs: 4
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.max_background_compactions: -1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.max_subcompactions: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.max_open_files: -1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.bytes_per_sync: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.max_background_flushes: -1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Compression algorithms supported:
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         kZSTD supported: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         kXpressCompression supported: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         kBZip2Compression supported: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         kLZ4Compression supported: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         kZlibCompression supported: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         kLZ4HCCompression supported: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         kSnappyCompression supported: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559b45852fe0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559b4583add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559b45852fe0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559b4583add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559b45852fe0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559b4583add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559b45852fe0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559b4583add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559b45852fe0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559b4583add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559b45852fe0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559b4583add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559b45852fe0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559b4583add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559b45852fc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559b4583a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559b45852fc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559b4583a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559b45852fc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559b4583a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 749f9966-7cb0-4513-811b-76fda9c22b96
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336506864876, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336506865072, "job": 1, "event": "recovery_finished"}
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct 01 16:35:06 compute-0 ceph-osd[88140]: freelist init
Oct 01 16:35:06 compute-0 ceph-osd[88140]: freelist _read_cfg
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 01 16:35:06 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bluefs umount
Oct 01 16:35:06 compute-0 ceph-osd[88140]: bdev(0x559b45991400 /var/lib/ceph/osd/ceph-0/block) close
Oct 01 16:35:07 compute-0 ceph-osd[88140]: bdev(0x559b45991400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 01 16:35:07 compute-0 ceph-osd[88140]: bdev(0x559b45991400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 01 16:35:07 compute-0 ceph-osd[88140]: bdev(0x559b45991400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:07 compute-0 ceph-osd[88140]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 01 16:35:07 compute-0 ceph-osd[88140]: bluefs mount
Oct 01 16:35:07 compute-0 ceph-osd[88140]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: bluefs mount shared_bdev_used = 4718592
Oct 01 16:35:07 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: RocksDB version: 7.9.2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Git sha 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: DB SUMMARY
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: DB Session ID:  D5RWPS9N2UNZ63124CKG
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: CURRENT file:  CURRENT
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: IDENTITY file:  IDENTITY
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                         Options.error_if_exists: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                       Options.create_if_missing: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                         Options.paranoid_checks: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                                     Options.env: 0x559b4681c3f0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                                Options.info_log: 0x559b45853340
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.max_file_opening_threads: 16
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                              Options.statistics: (nil)
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                               Options.use_fsync: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                       Options.max_log_file_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                         Options.allow_fallocate: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.use_direct_reads: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.create_missing_column_families: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                              Options.db_log_dir: 
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                                 Options.wal_dir: db.wal
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.advise_random_on_open: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.write_buffer_manager: 0x559b4676c460
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                            Options.rate_limiter: (nil)
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.unordered_write: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                               Options.row_cache: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                              Options.wal_filter: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.allow_ingest_behind: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.two_write_queues: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.manual_wal_flush: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.wal_compression: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.atomic_flush: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                 Options.log_readahead_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.allow_data_in_errors: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.db_host_id: __hostname__
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.max_background_jobs: 4
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.max_background_compactions: -1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.max_subcompactions: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.max_open_files: -1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.bytes_per_sync: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.max_background_flushes: -1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Compression algorithms supported:
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         kZSTD supported: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         kXpressCompression supported: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         kBZip2Compression supported: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         kLZ4Compression supported: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         kZlibCompression supported: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         kLZ4HCCompression supported: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         kSnappyCompression supported: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559b45852b00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559b4583add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559b45852b00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559b4583add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559b45852b00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559b4583add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559b45852b00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559b4583add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559b45852b00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559b4583add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559b45852b00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559b4583add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559b45852b00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559b4583add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559b458530e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559b4583a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559b458530e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559b4583a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559b458530e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559b4583a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 749f9966-7cb0-4513-811b-76fda9c22b96
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336507140026, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336507144686, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336507, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "749f9966-7cb0-4513-811b-76fda9c22b96", "db_session_id": "D5RWPS9N2UNZ63124CKG", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336507150097, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336507, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "749f9966-7cb0-4513-811b-76fda9c22b96", "db_session_id": "D5RWPS9N2UNZ63124CKG", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336507153425, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336507, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "749f9966-7cb0-4513-811b-76fda9c22b96", "db_session_id": "D5RWPS9N2UNZ63124CKG", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336507155623, "job": 1, "event": "recovery_finished"}
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559b4684c000
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: DB pointer 0x559b4675da00
Oct 01 16:35:07 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 01 16:35:07 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct 01 16:35:07 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 16:35:07 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583a430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583a430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583a430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 16:35:07 compute-0 ceph-osd[88140]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 01 16:35:07 compute-0 ceph-osd[88140]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 01 16:35:07 compute-0 ceph-osd[88140]: _get_class not permitted to load lua
Oct 01 16:35:07 compute-0 ceph-osd[88140]: _get_class not permitted to load sdk
Oct 01 16:35:07 compute-0 ceph-osd[88140]: _get_class not permitted to load test_remote_reads
Oct 01 16:35:07 compute-0 ceph-osd[88140]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 01 16:35:07 compute-0 ceph-osd[88140]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 01 16:35:07 compute-0 ceph-osd[88140]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 01 16:35:07 compute-0 ceph-osd[88140]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 01 16:35:07 compute-0 ceph-osd[88140]: osd.0 0 load_pgs
Oct 01 16:35:07 compute-0 ceph-osd[88140]: osd.0 0 load_pgs opened 0 pgs
Oct 01 16:35:07 compute-0 ceph-osd[88140]: osd.0 0 log_to_monitors true
Oct 01 16:35:07 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0[88136]: 2025-10-01T16:35:07.186+0000 7fc25208a740 -1 osd.0 0 log_to_monitors true
Oct 01 16:35:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Oct 01 16:35:07 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2673302381,v1:192.168.122.100:6803/2673302381]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 01 16:35:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:35:07 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate-test[88368]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 01 16:35:07 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate-test[88368]:                             [--no-systemd] [--no-tmpfs]
Oct 01 16:35:07 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate-test[88368]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 01 16:35:07 compute-0 systemd[1]: libpod-f26a14b69d03392a32e12db1a00b95c170aeb3e41118829e50eb550e67d8cfe3.scope: Deactivated successfully.
Oct 01 16:35:07 compute-0 podman[88352]: 2025-10-01 16:35:07.359158511 +0000 UTC m=+0.765618342 container died f26a14b69d03392a32e12db1a00b95c170aeb3e41118829e50eb550e67d8cfe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate-test, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 16:35:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-d459fb04e1ffd91bf55cc2af9d0e2f1bc809d8ee3656d3088236130e0ecdd59b-merged.mount: Deactivated successfully.
Oct 01 16:35:07 compute-0 podman[88352]: 2025-10-01 16:35:07.413252341 +0000 UTC m=+0.819712172 container remove f26a14b69d03392a32e12db1a00b95c170aeb3e41118829e50eb550e67d8cfe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate-test, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 01 16:35:07 compute-0 systemd[1]: libpod-conmon-f26a14b69d03392a32e12db1a00b95c170aeb3e41118829e50eb550e67d8cfe3.scope: Deactivated successfully.
Oct 01 16:35:07 compute-0 systemd[1]: Reloading.
Oct 01 16:35:07 compute-0 systemd-rc-local-generator[88840]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:35:07 compute-0 systemd-sysv-generator[88843]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:35:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct 01 16:35:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 16:35:07 compute-0 ceph-mon[74273]: from='osd.0 [v2:192.168.122.100:6802/2673302381,v1:192.168.122.100:6803/2673302381]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 01 16:35:07 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2673302381,v1:192.168.122.100:6803/2673302381]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 01 16:35:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Oct 01 16:35:07 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Oct 01 16:35:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 01 16:35:07 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2673302381,v1:192.168.122.100:6803/2673302381]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 01 16:35:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 01 16:35:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 16:35:07 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:35:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 16:35:07 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 16:35:07 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:07 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 16:35:07 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 16:35:07 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 16:35:07 compute-0 systemd[1]: Reloading.
Oct 01 16:35:07 compute-0 systemd-rc-local-generator[88881]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:35:07 compute-0 systemd-sysv-generator[88885]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:35:08 compute-0 systemd[1]: Starting Ceph osd.1 for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5...
Oct 01 16:35:08 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 01 16:35:08 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 01 16:35:08 compute-0 podman[88940]: 2025-10-01 16:35:08.331730908 +0000 UTC m=+0.037956380 container create 29f712f376f67b52cc95af26ddaaae829e11b76c88398acb4669700878d03db3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:35:08 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a79b0001eadc1d39117148c0244431a36a08093a8d2c60e94af5c808d469a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a79b0001eadc1d39117148c0244431a36a08093a8d2c60e94af5c808d469a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a79b0001eadc1d39117148c0244431a36a08093a8d2c60e94af5c808d469a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a79b0001eadc1d39117148c0244431a36a08093a8d2c60e94af5c808d469a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a79b0001eadc1d39117148c0244431a36a08093a8d2c60e94af5c808d469a2/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:08 compute-0 podman[88940]: 2025-10-01 16:35:08.403120308 +0000 UTC m=+0.109345780 container init 29f712f376f67b52cc95af26ddaaae829e11b76c88398acb4669700878d03db3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 16:35:08 compute-0 podman[88940]: 2025-10-01 16:35:08.409289953 +0000 UTC m=+0.115515425 container start 29f712f376f67b52cc95af26ddaaae829e11b76c88398acb4669700878d03db3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 16:35:08 compute-0 podman[88940]: 2025-10-01 16:35:08.313635061 +0000 UTC m=+0.019860563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:08 compute-0 podman[88940]: 2025-10-01 16:35:08.412593637 +0000 UTC m=+0.118819109 container attach 29f712f376f67b52cc95af26ddaaae829e11b76c88398acb4669700878d03db3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:35:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct 01 16:35:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 16:35:08 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2673302381,v1:192.168.122.100:6803/2673302381]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 01 16:35:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Oct 01 16:35:08 compute-0 ceph-osd[88140]: osd.0 0 done with init, starting boot process
Oct 01 16:35:08 compute-0 ceph-osd[88140]: osd.0 0 start_boot
Oct 01 16:35:08 compute-0 ceph-osd[88140]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 01 16:35:08 compute-0 ceph-osd[88140]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 01 16:35:08 compute-0 ceph-osd[88140]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 01 16:35:08 compute-0 ceph-osd[88140]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 01 16:35:08 compute-0 ceph-osd[88140]: osd.0 0  bench count 12288000 bsize 4 KiB
Oct 01 16:35:08 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Oct 01 16:35:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 16:35:08 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:35:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 16:35:08 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 16:35:08 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:08 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 16:35:08 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 16:35:08 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 16:35:08 compute-0 ceph-mon[74273]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:35:08 compute-0 ceph-mon[74273]: from='osd.0 [v2:192.168.122.100:6802/2673302381,v1:192.168.122.100:6803/2673302381]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 01 16:35:08 compute-0 ceph-mon[74273]: osdmap e7: 3 total, 0 up, 3 in
Oct 01 16:35:08 compute-0 ceph-mon[74273]: from='osd.0 [v2:192.168.122.100:6802/2673302381,v1:192.168.122.100:6803/2673302381]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 01 16:35:08 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:35:08 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:08 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:08 compute-0 ceph-mgr[74571]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2673302381; not ready for session (expect reconnect)
Oct 01 16:35:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 16:35:08 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:35:08 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 16:35:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:35:09 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate[88956]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 01 16:35:09 compute-0 bash[88940]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 01 16:35:09 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate[88956]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct 01 16:35:09 compute-0 bash[88940]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct 01 16:35:09 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate[88956]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct 01 16:35:09 compute-0 bash[88940]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct 01 16:35:09 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate[88956]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 01 16:35:09 compute-0 bash[88940]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 01 16:35:09 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate[88956]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 01 16:35:09 compute-0 bash[88940]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 01 16:35:09 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate[88956]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 01 16:35:09 compute-0 bash[88940]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 01 16:35:09 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate[88956]: --> ceph-volume raw activate successful for osd ID: 1
Oct 01 16:35:09 compute-0 bash[88940]: --> ceph-volume raw activate successful for osd ID: 1
Oct 01 16:35:09 compute-0 systemd[1]: libpod-29f712f376f67b52cc95af26ddaaae829e11b76c88398acb4669700878d03db3.scope: Deactivated successfully.
Oct 01 16:35:09 compute-0 podman[88940]: 2025-10-01 16:35:09.525720737 +0000 UTC m=+1.231946209 container died 29f712f376f67b52cc95af26ddaaae829e11b76c88398acb4669700878d03db3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:09 compute-0 systemd[1]: libpod-29f712f376f67b52cc95af26ddaaae829e11b76c88398acb4669700878d03db3.scope: Consumed 1.124s CPU time.
Oct 01 16:35:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-96a79b0001eadc1d39117148c0244431a36a08093a8d2c60e94af5c808d469a2-merged.mount: Deactivated successfully.
Oct 01 16:35:09 compute-0 podman[88940]: 2025-10-01 16:35:09.625969827 +0000 UTC m=+1.332195339 container remove 29f712f376f67b52cc95af26ddaaae829e11b76c88398acb4669700878d03db3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 16:35:09 compute-0 ceph-mgr[74571]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2673302381; not ready for session (expect reconnect)
Oct 01 16:35:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 16:35:09 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:35:09 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 16:35:09 compute-0 ceph-mon[74273]: from='osd.0 [v2:192.168.122.100:6802/2673302381,v1:192.168.122.100:6803/2673302381]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 01 16:35:09 compute-0 ceph-mon[74273]: osdmap e8: 3 total, 0 up, 3 in
Oct 01 16:35:09 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:35:09 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:09 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:09 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:35:09 compute-0 podman[89148]: 2025-10-01 16:35:09.847876129 +0000 UTC m=+0.059938010 container create e9f714ab807d06226051cf8f29089322f8a65155729abd131fb294dac7d77f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 16:35:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a3035f3d69d5f316134602cf7a64f29e8dc810a2d6a0ee17b570ee182575ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a3035f3d69d5f316134602cf7a64f29e8dc810a2d6a0ee17b570ee182575ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a3035f3d69d5f316134602cf7a64f29e8dc810a2d6a0ee17b570ee182575ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a3035f3d69d5f316134602cf7a64f29e8dc810a2d6a0ee17b570ee182575ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a3035f3d69d5f316134602cf7a64f29e8dc810a2d6a0ee17b570ee182575ab/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:09 compute-0 podman[89148]: 2025-10-01 16:35:09.81469678 +0000 UTC m=+0.026758681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:09 compute-0 podman[89148]: 2025-10-01 16:35:09.925766699 +0000 UTC m=+0.137828590 container init e9f714ab807d06226051cf8f29089322f8a65155729abd131fb294dac7d77f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:09 compute-0 podman[89148]: 2025-10-01 16:35:09.933887889 +0000 UTC m=+0.145949770 container start e9f714ab807d06226051cf8f29089322f8a65155729abd131fb294dac7d77f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:09 compute-0 bash[89148]: e9f714ab807d06226051cf8f29089322f8a65155729abd131fb294dac7d77f6f
Oct 01 16:35:09 compute-0 systemd[1]: Started Ceph osd.1 for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5.
Oct 01 16:35:09 compute-0 ceph-osd[89167]: set uid:gid to 167:167 (ceph:ceph)
Oct 01 16:35:09 compute-0 ceph-osd[89167]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 01 16:35:09 compute-0 ceph-osd[89167]: pidfile_write: ignore empty --pid-file
Oct 01 16:35:09 compute-0 ceph-osd[89167]: bdev(0x5624c4ead800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 01 16:35:09 compute-0 ceph-osd[89167]: bdev(0x5624c4ead800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 01 16:35:09 compute-0 ceph-osd[89167]: bdev(0x5624c4ead800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:09 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 16:35:09 compute-0 ceph-osd[89167]: bdev(0x5624c5cef800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 01 16:35:09 compute-0 ceph-osd[89167]: bdev(0x5624c5cef800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 01 16:35:09 compute-0 ceph-osd[89167]: bdev(0x5624c5cef800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:09 compute-0 ceph-osd[89167]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 01 16:35:09 compute-0 ceph-osd[89167]: bdev(0x5624c5cef800 /var/lib/ceph/osd/ceph-1/block) close
Oct 01 16:35:09 compute-0 sudo[88228]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:35:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:35:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Oct 01 16:35:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 01 16:35:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:35:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:10 compute-0 ceph-mgr[74571]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Oct 01 16:35:10 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Oct 01 16:35:10 compute-0 sudo[89180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:10 compute-0 sudo[89180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:10 compute-0 sudo[89180]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:10 compute-0 sudo[89205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:10 compute-0 sudo[89205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:10 compute-0 sudo[89205]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:10 compute-0 sudo[89230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:10 compute-0 sudo[89230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:10 compute-0 sudo[89230]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:10 compute-0 ceph-osd[89167]: bdev(0x5624c4ead800 /var/lib/ceph/osd/ceph-1/block) close
Oct 01 16:35:10 compute-0 sudo[89255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:35:10 compute-0 sudo[89255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:10 compute-0 ceph-osd[89167]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct 01 16:35:10 compute-0 ceph-osd[89167]: load: jerasure load: lrc 
Oct 01 16:35:10 compute-0 ceph-osd[89167]: bdev(0x5624c5d70c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 01 16:35:10 compute-0 ceph-osd[89167]: bdev(0x5624c5d70c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 01 16:35:10 compute-0 ceph-osd[89167]: bdev(0x5624c5d70c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:10 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 16:35:10 compute-0 ceph-osd[89167]: bdev(0x5624c5d70c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 01 16:35:10 compute-0 podman[89331]: 2025-10-01 16:35:10.646109366 +0000 UTC m=+0.046436793 container create 2096fca01acb559df6f07c49705bca942626de2441a3dfa2d727bfefb8cc3f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 01 16:35:10 compute-0 systemd[1]: Started libpod-conmon-2096fca01acb559df6f07c49705bca942626de2441a3dfa2d727bfefb8cc3f90.scope.
Oct 01 16:35:10 compute-0 podman[89331]: 2025-10-01 16:35:10.62198622 +0000 UTC m=+0.022313667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:10 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:10 compute-0 podman[89331]: 2025-10-01 16:35:10.748692202 +0000 UTC m=+0.149019649 container init 2096fca01acb559df6f07c49705bca942626de2441a3dfa2d727bfefb8cc3f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cray, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 16:35:10 compute-0 podman[89331]: 2025-10-01 16:35:10.760627993 +0000 UTC m=+0.160955420 container start 2096fca01acb559df6f07c49705bca942626de2441a3dfa2d727bfefb8cc3f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:10 compute-0 brave_cray[89347]: 167 167
Oct 01 16:35:10 compute-0 systemd[1]: libpod-2096fca01acb559df6f07c49705bca942626de2441a3dfa2d727bfefb8cc3f90.scope: Deactivated successfully.
Oct 01 16:35:10 compute-0 podman[89331]: 2025-10-01 16:35:10.77642448 +0000 UTC m=+0.176751997 container attach 2096fca01acb559df6f07c49705bca942626de2441a3dfa2d727bfefb8cc3f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cray, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 16:35:10 compute-0 podman[89331]: 2025-10-01 16:35:10.777226706 +0000 UTC m=+0.177554133 container died 2096fca01acb559df6f07c49705bca942626de2441a3dfa2d727bfefb8cc3f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:10 compute-0 ceph-osd[89167]: bdev(0x5624c5d70c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 01 16:35:10 compute-0 ceph-osd[89167]: bdev(0x5624c5d70c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 01 16:35:10 compute-0 ceph-osd[89167]: bdev(0x5624c5d70c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:10 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 16:35:10 compute-0 ceph-osd[89167]: bdev(0x5624c5d70c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 01 16:35:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-49ec3160f27b3689a3f40d2d7af448dfb9022455f4ca52a7b767960de2bf61b4-merged.mount: Deactivated successfully.
Oct 01 16:35:10 compute-0 ceph-mgr[74571]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2673302381; not ready for session (expect reconnect)
Oct 01 16:35:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 16:35:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:35:10 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 16:35:10 compute-0 ceph-mon[74273]: purged_snaps scrub starts
Oct 01 16:35:10 compute-0 ceph-mon[74273]: purged_snaps scrub ok
Oct 01 16:35:10 compute-0 ceph-mon[74273]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:35:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:35:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 01 16:35:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:35:10 compute-0 podman[89331]: 2025-10-01 16:35:10.855676408 +0000 UTC m=+0.256003835 container remove 2096fca01acb559df6f07c49705bca942626de2441a3dfa2d727bfefb8cc3f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cray, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:10 compute-0 systemd[1]: libpod-conmon-2096fca01acb559df6f07c49705bca942626de2441a3dfa2d727bfefb8cc3f90.scope: Deactivated successfully.
Oct 01 16:35:11 compute-0 ceph-osd[89167]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 01 16:35:11 compute-0 ceph-osd[89167]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bdev(0x5624c5d70c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bdev(0x5624c5d70c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bdev(0x5624c5d70c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bdev(0x5624c5d71400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bdev(0x5624c5d71400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bdev(0x5624c5d71400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluefs mount
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluefs mount shared_bdev_used = 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: RocksDB version: 7.9.2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Git sha 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: DB SUMMARY
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: DB Session ID:  NSG9TL1N6YAQW3ZEHEAK
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: CURRENT file:  CURRENT
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: IDENTITY file:  IDENTITY
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                         Options.error_if_exists: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.create_if_missing: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                         Options.paranoid_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                                     Options.env: 0x5624c5d41c70
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                                Options.info_log: 0x5624c4f348a0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_file_opening_threads: 16
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                              Options.statistics: (nil)
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.use_fsync: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.max_log_file_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                         Options.allow_fallocate: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.use_direct_reads: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.create_missing_column_families: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                              Options.db_log_dir: 
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                                 Options.wal_dir: db.wal
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.advise_random_on_open: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.write_buffer_manager: 0x5624c5e4a460
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                            Options.rate_limiter: (nil)
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.unordered_write: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.row_cache: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                              Options.wal_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.allow_ingest_behind: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.two_write_queues: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.manual_wal_flush: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.wal_compression: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.atomic_flush: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.log_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.allow_data_in_errors: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.db_host_id: __hostname__
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.max_background_jobs: 4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.max_background_compactions: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.max_subcompactions: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.max_open_files: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.bytes_per_sync: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.max_background_flushes: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Compression algorithms supported:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         kZSTD supported: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         kXpressCompression supported: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         kBZip2Compression supported: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         kLZ4Compression supported: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         kZlibCompression supported: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         kLZ4HCCompression supported: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         kSnappyCompression supported: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5624c4f342c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5624c4f211f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5624c4f342c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5624c4f211f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5624c4f342c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5624c4f211f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5624c4f342c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5624c4f211f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5624c4f342c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5624c4f211f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5624c4f342c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5624c4f211f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5624c4f342c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5624c4f211f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5624c4f34240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5624c4f21090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5624c4f34240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5624c4f21090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5624c4f34240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5624c4f21090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a8d57df7-dd9c-4e47-9eff-d09ae8367651
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336511078933, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336511079155, "job": 1, "event": "recovery_finished"}
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: freelist init
Oct 01 16:35:11 compute-0 ceph-osd[89167]: freelist _read_cfg
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluefs umount
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bdev(0x5624c5d71400 /var/lib/ceph/osd/ceph-1/block) close
Oct 01 16:35:11 compute-0 podman[89577]: 2025-10-01 16:35:11.183790106 +0000 UTC m=+0.065461961 container create 183e8173ac43b82b1fd10ff02db606790f154dda9fb68739ac00026fffc8b1da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate-test, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Oct 01 16:35:11 compute-0 podman[89577]: 2025-10-01 16:35:11.152868472 +0000 UTC m=+0.034540417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:11 compute-0 systemd[1]: Started libpod-conmon-183e8173ac43b82b1fd10ff02db606790f154dda9fb68739ac00026fffc8b1da.scope.
Oct 01 16:35:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:35:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:35:11
Oct 01 16:35:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:35:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:35:11 compute-0 ceph-mgr[74571]: [balancer INFO root] No pools available
Oct 01 16:35:11 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54b68e71a3c6e644e31ff5da6db10277d0a26a1a345f11df4c05e52055ebd28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54b68e71a3c6e644e31ff5da6db10277d0a26a1a345f11df4c05e52055ebd28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54b68e71a3c6e644e31ff5da6db10277d0a26a1a345f11df4c05e52055ebd28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54b68e71a3c6e644e31ff5da6db10277d0a26a1a345f11df4c05e52055ebd28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54b68e71a3c6e644e31ff5da6db10277d0a26a1a345f11df4c05e52055ebd28/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:35:11 compute-0 podman[89577]: 2025-10-01 16:35:11.336481819 +0000 UTC m=+0.218153764 container init 183e8173ac43b82b1fd10ff02db606790f154dda9fb68739ac00026fffc8b1da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:35:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:35:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:35:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:35:11 compute-0 podman[89577]: 2025-10-01 16:35:11.347612645 +0000 UTC m=+0.229284500 container start 183e8173ac43b82b1fd10ff02db606790f154dda9fb68739ac00026fffc8b1da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate-test, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:35:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:35:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:35:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bdev(0x5624c5d71400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bdev(0x5624c5d71400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bdev(0x5624c5d71400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluefs mount
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluefs mount shared_bdev_used = 4718592
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: RocksDB version: 7.9.2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Git sha 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: DB SUMMARY
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: DB Session ID:  NSG9TL1N6YAQW3ZEHEAL
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: CURRENT file:  CURRENT
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: IDENTITY file:  IDENTITY
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                         Options.error_if_exists: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.create_if_missing: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                         Options.paranoid_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                                     Options.env: 0x5624c5ef23f0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                                Options.info_log: 0x5624c4f34600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_file_opening_threads: 16
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                              Options.statistics: (nil)
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.use_fsync: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.max_log_file_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                         Options.allow_fallocate: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.use_direct_reads: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.create_missing_column_families: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                              Options.db_log_dir: 
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                                 Options.wal_dir: db.wal
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.advise_random_on_open: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.write_buffer_manager: 0x5624c5e4a460
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                            Options.rate_limiter: (nil)
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.unordered_write: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.row_cache: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                              Options.wal_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.allow_ingest_behind: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.two_write_queues: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.manual_wal_flush: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.wal_compression: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.atomic_flush: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.log_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.allow_data_in_errors: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.db_host_id: __hostname__
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.max_background_jobs: 4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.max_background_compactions: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.max_subcompactions: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.max_open_files: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.bytes_per_sync: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.max_background_flushes: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Compression algorithms supported:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         kZSTD supported: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         kXpressCompression supported: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         kBZip2Compression supported: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         kLZ4Compression supported: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         kZlibCompression supported: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         kLZ4HCCompression supported: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         kSnappyCompression supported: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5624c4f34a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5624c4f211f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:11 compute-0 podman[89577]: 2025-10-01 16:35:11.37002054 +0000 UTC m=+0.251692435 container attach 183e8173ac43b82b1fd10ff02db606790f154dda9fb68739ac00026fffc8b1da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5624c4f34a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5624c4f211f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5624c4f34a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5624c4f211f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5624c4f34a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5624c4f211f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5624c4f34a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5624c4f211f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5624c4f34a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5624c4f211f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5624c4f34a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5624c4f211f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5624c4f34380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5624c4f21090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5624c4f34380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5624c4f21090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5624c4f34380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5624c4f21090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a8d57df7-dd9c-4e47-9eff-d09ae8367651
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336511375740, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336511397569, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336511, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a8d57df7-dd9c-4e47-9eff-d09ae8367651", "db_session_id": "NSG9TL1N6YAQW3ZEHEAL", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336511419120, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 467, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336511, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a8d57df7-dd9c-4e47-9eff-d09ae8367651", "db_session_id": "NSG9TL1N6YAQW3ZEHEAL", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336511422585, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336511, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a8d57df7-dd9c-4e47-9eff-d09ae8367651", "db_session_id": "NSG9TL1N6YAQW3ZEHEAL", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336511424384, "job": 1, "event": "recovery_finished"}
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5624c508e000
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: DB pointer 0x5624c5e33a00
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Oct 01 16:35:11 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 16:35:11 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f21090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f21090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f21090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 16:35:11 compute-0 ceph-osd[89167]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 01 16:35:11 compute-0 ceph-osd[89167]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 01 16:35:11 compute-0 ceph-osd[89167]: _get_class not permitted to load lua
Oct 01 16:35:11 compute-0 ceph-osd[89167]: _get_class not permitted to load sdk
Oct 01 16:35:11 compute-0 ceph-osd[89167]: _get_class not permitted to load test_remote_reads
Oct 01 16:35:11 compute-0 ceph-osd[89167]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 01 16:35:11 compute-0 ceph-osd[89167]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 01 16:35:11 compute-0 ceph-osd[89167]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 01 16:35:11 compute-0 ceph-osd[89167]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 01 16:35:11 compute-0 ceph-osd[89167]: osd.1 0 load_pgs
Oct 01 16:35:11 compute-0 ceph-osd[89167]: osd.1 0 load_pgs opened 0 pgs
Oct 01 16:35:11 compute-0 ceph-osd[89167]: osd.1 0 log_to_monitors true
Oct 01 16:35:11 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1[89163]: 2025-10-01T16:35:11.522+0000 7f68e5056740 -1 osd.1 0 log_to_monitors true
Oct 01 16:35:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Oct 01 16:35:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3366200123,v1:192.168.122.100:6807/3366200123]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 01 16:35:11 compute-0 ceph-mgr[74571]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2673302381; not ready for session (expect reconnect)
Oct 01 16:35:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 16:35:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:35:11 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 16:35:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct 01 16:35:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 16:35:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3366200123,v1:192.168.122.100:6807/3366200123]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 01 16:35:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Oct 01 16:35:11 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Oct 01 16:35:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 01 16:35:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3366200123,v1:192.168.122.100:6807/3366200123]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 01 16:35:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 01 16:35:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 16:35:11 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 16:35:11 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 16:35:11 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 16:35:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:35:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 16:35:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 16:35:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:11 compute-0 ceph-mon[74273]: Deploying daemon osd.2 on compute-0
Oct 01 16:35:11 compute-0 ceph-mon[74273]: from='osd.1 [v2:192.168.122.100:6806/3366200123,v1:192.168.122.100:6807/3366200123]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 01 16:35:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:35:11 compute-0 ceph-osd[88140]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 36.312 iops: 9295.807 elapsed_sec: 0.323
Oct 01 16:35:11 compute-0 ceph-osd[88140]: log_channel(cluster) log [WRN] : OSD bench result of 9295.806929 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 01 16:35:11 compute-0 ceph-osd[88140]: osd.0 0 waiting for initial osdmap
Oct 01 16:35:11 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0[88136]: 2025-10-01T16:35:11.962+0000 7fc24e00a640 -1 osd.0 0 waiting for initial osdmap
Oct 01 16:35:11 compute-0 ceph-osd[88140]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct 01 16:35:11 compute-0 ceph-osd[88140]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct 01 16:35:11 compute-0 ceph-osd[88140]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct 01 16:35:11 compute-0 ceph-osd[88140]: osd.0 9 check_osdmap_features require_osd_release unknown -> reef
Oct 01 16:35:11 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate-test[89594]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 01 16:35:11 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate-test[89594]:                             [--no-systemd] [--no-tmpfs]
Oct 01 16:35:11 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate-test[89594]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 01 16:35:11 compute-0 ceph-osd[88140]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 01 16:35:11 compute-0 ceph-osd[88140]: osd.0 9 set_numa_affinity not setting numa affinity
Oct 01 16:35:11 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-0[88136]: 2025-10-01T16:35:11.990+0000 7fc249632640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 01 16:35:11 compute-0 ceph-osd[88140]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Oct 01 16:35:12 compute-0 systemd[1]: libpod-183e8173ac43b82b1fd10ff02db606790f154dda9fb68739ac00026fffc8b1da.scope: Deactivated successfully.
Oct 01 16:35:12 compute-0 podman[89577]: 2025-10-01 16:35:12.007399879 +0000 UTC m=+0.889071744 container died 183e8173ac43b82b1fd10ff02db606790f154dda9fb68739ac00026fffc8b1da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate-test, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d54b68e71a3c6e644e31ff5da6db10277d0a26a1a345f11df4c05e52055ebd28-merged.mount: Deactivated successfully.
Oct 01 16:35:12 compute-0 podman[89577]: 2025-10-01 16:35:12.063383161 +0000 UTC m=+0.945055016 container remove 183e8173ac43b82b1fd10ff02db606790f154dda9fb68739ac00026fffc8b1da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:12 compute-0 systemd[1]: libpod-conmon-183e8173ac43b82b1fd10ff02db606790f154dda9fb68739ac00026fffc8b1da.scope: Deactivated successfully.
Oct 01 16:35:12 compute-0 systemd[1]: Reloading.
Oct 01 16:35:12 compute-0 systemd-rc-local-generator[89876]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:35:12 compute-0 systemd-sysv-generator[89881]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:35:12 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 01 16:35:12 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 01 16:35:12 compute-0 systemd[1]: Reloading.
Oct 01 16:35:12 compute-0 systemd-sysv-generator[89919]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:35:12 compute-0 systemd-rc-local-generator[89916]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:35:12 compute-0 ceph-mgr[74571]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2673302381; not ready for session (expect reconnect)
Oct 01 16:35:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 16:35:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:35:12 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 16:35:12 compute-0 systemd[1]: Starting Ceph osd.2 for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5...
Oct 01 16:35:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct 01 16:35:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 16:35:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3366200123,v1:192.168.122.100:6807/3366200123]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 01 16:35:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Oct 01 16:35:12 compute-0 ceph-osd[89167]: osd.1 0 done with init, starting boot process
Oct 01 16:35:12 compute-0 ceph-osd[89167]: osd.1 0 start_boot
Oct 01 16:35:12 compute-0 ceph-osd[89167]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 01 16:35:12 compute-0 ceph-osd[89167]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 01 16:35:12 compute-0 ceph-osd[89167]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 01 16:35:12 compute-0 ceph-osd[89167]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 01 16:35:12 compute-0 ceph-osd[89167]: osd.1 0  bench count 12288000 bsize 4 KiB
Oct 01 16:35:12 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2673302381,v1:192.168.122.100:6803/2673302381] boot
Oct 01 16:35:12 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Oct 01 16:35:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 16:35:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:35:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 16:35:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 16:35:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:12 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 16:35:12 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 16:35:12 compute-0 ceph-mgr[74571]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3366200123; not ready for session (expect reconnect)
Oct 01 16:35:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 16:35:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:12 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 16:35:12 compute-0 ceph-osd[88140]: osd.0 10 state: booting -> active
Oct 01 16:35:12 compute-0 ceph-mon[74273]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:35:12 compute-0 ceph-mon[74273]: from='osd.1 [v2:192.168.122.100:6806/3366200123,v1:192.168.122.100:6807/3366200123]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 01 16:35:12 compute-0 ceph-mon[74273]: osdmap e9: 3 total, 0 up, 3 in
Oct 01 16:35:12 compute-0 ceph-mon[74273]: from='osd.1 [v2:192.168.122.100:6806/3366200123,v1:192.168.122.100:6807/3366200123]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 01 16:35:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:35:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:12 compute-0 ceph-mon[74273]: OSD bench result of 9295.806929 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 01 16:35:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:35:12 compute-0 ceph-mon[74273]: from='osd.1 [v2:192.168.122.100:6806/3366200123,v1:192.168.122.100:6807/3366200123]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 01 16:35:12 compute-0 ceph-mon[74273]: osd.0 [v2:192.168.122.100:6802/2673302381,v1:192.168.122.100:6803/2673302381] boot
Oct 01 16:35:12 compute-0 ceph-mon[74273]: osdmap e10: 3 total, 1 up, 3 in
Oct 01 16:35:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 16:35:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:13 compute-0 podman[89973]: 2025-10-01 16:35:13.098829729 +0000 UTC m=+0.060638276 container create 5febc049e1b187cb60b7f4e21f8f39d5c71aca444393f98aa4ec1c2461b92812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:35:13 compute-0 podman[89973]: 2025-10-01 16:35:13.059003116 +0000 UTC m=+0.020811683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:13 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf6be27816de5f225183e2e770ddfba5d0a2d782a230a0890ac9a4ddb76b8fdf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf6be27816de5f225183e2e770ddfba5d0a2d782a230a0890ac9a4ddb76b8fdf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf6be27816de5f225183e2e770ddfba5d0a2d782a230a0890ac9a4ddb76b8fdf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf6be27816de5f225183e2e770ddfba5d0a2d782a230a0890ac9a4ddb76b8fdf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf6be27816de5f225183e2e770ddfba5d0a2d782a230a0890ac9a4ddb76b8fdf/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:13 compute-0 podman[89973]: 2025-10-01 16:35:13.189722569 +0000 UTC m=+0.151531116 container init 5febc049e1b187cb60b7f4e21f8f39d5c71aca444393f98aa4ec1c2461b92812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 16:35:13 compute-0 podman[89973]: 2025-10-01 16:35:13.199803784 +0000 UTC m=+0.161612331 container start 5febc049e1b187cb60b7f4e21f8f39d5c71aca444393f98aa4ec1c2461b92812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 16:35:13 compute-0 podman[89973]: 2025-10-01 16:35:13.210591385 +0000 UTC m=+0.172399932 container attach 5febc049e1b187cb60b7f4e21f8f39d5c71aca444393f98aa4ec1c2461b92812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:35:13 compute-0 ceph-mgr[74571]: [devicehealth INFO root] creating mgr pool
Oct 01 16:35:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Oct 01 16:35:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 01 16:35:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:35:13 compute-0 sudo[90018]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbycfhwtknhbjgnpelubvehrkrldwxve ; /usr/bin/python3'
Oct 01 16:35:13 compute-0 sudo[90018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:13 compute-0 ceph-mgr[74571]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3366200123; not ready for session (expect reconnect)
Oct 01 16:35:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 16:35:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:13 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 16:35:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct 01 16:35:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 16:35:13 compute-0 ceph-mon[74273]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 16:35:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 01 16:35:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 01 16:35:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Oct 01 16:35:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Oct 01 16:35:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct 01 16:35:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct 01 16:35:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct 01 16:35:13 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Oct 01 16:35:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 16:35:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 16:35:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:13 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 16:35:13 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 16:35:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Oct 01 16:35:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 01 16:35:13 compute-0 ceph-osd[88140]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 01 16:35:13 compute-0 ceph-osd[88140]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct 01 16:35:13 compute-0 ceph-osd[88140]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 01 16:35:13 compute-0 python3[90020]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:14 compute-0 podman[90030]: 2025-10-01 16:35:14.022140809 +0000 UTC m=+0.055167619 container create 942e4e7728699b1bb335b70f4d51b2bbab82e81c662918c062975beda8a23a85 (image=quay.io/ceph/ceph:v18, name=dreamy_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:14 compute-0 systemd[1]: Started libpod-conmon-942e4e7728699b1bb335b70f4d51b2bbab82e81c662918c062975beda8a23a85.scope.
Oct 01 16:35:14 compute-0 podman[90030]: 2025-10-01 16:35:13.997669269 +0000 UTC m=+0.030696109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:14 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b906b8ad0ceedafe44290e88b1f4ca052ae0723ddfbc017df25e0209a0a8c3b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b906b8ad0ceedafe44290e88b1f4ca052ae0723ddfbc017df25e0209a0a8c3b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b906b8ad0ceedafe44290e88b1f4ca052ae0723ddfbc017df25e0209a0a8c3b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:14 compute-0 podman[90030]: 2025-10-01 16:35:14.133500317 +0000 UTC m=+0.166527157 container init 942e4e7728699b1bb335b70f4d51b2bbab82e81c662918c062975beda8a23a85 (image=quay.io/ceph/ceph:v18, name=dreamy_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 16:35:14 compute-0 podman[90030]: 2025-10-01 16:35:14.141185103 +0000 UTC m=+0.174211913 container start 942e4e7728699b1bb335b70f4d51b2bbab82e81c662918c062975beda8a23a85 (image=quay.io/ceph/ceph:v18, name=dreamy_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 16:35:14 compute-0 podman[90030]: 2025-10-01 16:35:14.155198256 +0000 UTC m=+0.188225066 container attach 942e4e7728699b1bb335b70f4d51b2bbab82e81c662918c062975beda8a23a85 (image=quay.io/ceph/ceph:v18, name=dreamy_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:14 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate[89988]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 01 16:35:14 compute-0 bash[89973]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 01 16:35:14 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate[89988]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct 01 16:35:14 compute-0 bash[89973]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct 01 16:35:14 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate[89988]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct 01 16:35:14 compute-0 bash[89973]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct 01 16:35:14 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate[89988]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 01 16:35:14 compute-0 bash[89973]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 01 16:35:14 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate[89988]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 01 16:35:14 compute-0 bash[89973]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 01 16:35:14 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate[89988]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 01 16:35:14 compute-0 bash[89973]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 01 16:35:14 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate[89988]: --> ceph-volume raw activate successful for osd ID: 2
Oct 01 16:35:14 compute-0 bash[89973]: --> ceph-volume raw activate successful for osd ID: 2
Oct 01 16:35:14 compute-0 systemd[1]: libpod-5febc049e1b187cb60b7f4e21f8f39d5c71aca444393f98aa4ec1c2461b92812.scope: Deactivated successfully.
Oct 01 16:35:14 compute-0 systemd[1]: libpod-5febc049e1b187cb60b7f4e21f8f39d5c71aca444393f98aa4ec1c2461b92812.scope: Consumed 1.070s CPU time.
Oct 01 16:35:14 compute-0 conmon[89988]: conmon 5febc049e1b187cb60b7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5febc049e1b187cb60b7f4e21f8f39d5c71aca444393f98aa4ec1c2461b92812.scope/container/memory.events
Oct 01 16:35:14 compute-0 podman[89973]: 2025-10-01 16:35:14.274132172 +0000 UTC m=+1.235940719 container died 5febc049e1b187cb60b7f4e21f8f39d5c71aca444393f98aa4ec1c2461b92812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf6be27816de5f225183e2e770ddfba5d0a2d782a230a0890ac9a4ddb76b8fdf-merged.mount: Deactivated successfully.
Oct 01 16:35:14 compute-0 podman[89973]: 2025-10-01 16:35:14.379342513 +0000 UTC m=+1.341151060 container remove 5febc049e1b187cb60b7f4e21f8f39d5c71aca444393f98aa4ec1c2461b92812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 16:35:14 compute-0 podman[90247]: 2025-10-01 16:35:14.647675007 +0000 UTC m=+0.047224010 container create 412bad0677b0b014614986ed3c9e112ce30a58ba1ff7f5731eecb6eac919e635 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 16:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f66ff10ab1b96e7fc6ba8618f895e8e55ac97c6f07b488faf22db5db9341c03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f66ff10ab1b96e7fc6ba8618f895e8e55ac97c6f07b488faf22db5db9341c03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f66ff10ab1b96e7fc6ba8618f895e8e55ac97c6f07b488faf22db5db9341c03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f66ff10ab1b96e7fc6ba8618f895e8e55ac97c6f07b488faf22db5db9341c03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f66ff10ab1b96e7fc6ba8618f895e8e55ac97c6f07b488faf22db5db9341c03/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:14 compute-0 podman[90247]: 2025-10-01 16:35:14.62642986 +0000 UTC m=+0.025978873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 01 16:35:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3471500302' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 01 16:35:14 compute-0 dreamy_jemison[90057]: 
Oct 01 16:35:14 compute-0 dreamy_jemison[90057]: {"fsid":"f44264e3-e26a-5bd3-9e84-b4ba651d9cf5","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":110,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":11,"num_osds":3,"num_up_osds":1,"osd_up_since":1759336512,"num_in_osds":3,"osd_in_since":1759336495,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-01T16:35:13.275828+0000","services":{}},"progress_events":{}}
Oct 01 16:35:14 compute-0 podman[90247]: 2025-10-01 16:35:14.744344002 +0000 UTC m=+0.143893035 container init 412bad0677b0b014614986ed3c9e112ce30a58ba1ff7f5731eecb6eac919e635 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 16:35:14 compute-0 podman[90247]: 2025-10-01 16:35:14.74933679 +0000 UTC m=+0.148885793 container start 412bad0677b0b014614986ed3c9e112ce30a58ba1ff7f5731eecb6eac919e635 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 16:35:14 compute-0 systemd[1]: libpod-942e4e7728699b1bb335b70f4d51b2bbab82e81c662918c062975beda8a23a85.scope: Deactivated successfully.
Oct 01 16:35:14 compute-0 bash[90247]: 412bad0677b0b014614986ed3c9e112ce30a58ba1ff7f5731eecb6eac919e635
Oct 01 16:35:14 compute-0 podman[90030]: 2025-10-01 16:35:14.772564759 +0000 UTC m=+0.805591569 container died 942e4e7728699b1bb335b70f4d51b2bbab82e81c662918c062975beda8a23a85 (image=quay.io/ceph/ceph:v18, name=dreamy_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:35:14 compute-0 systemd[1]: Started Ceph osd.2 for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5.
Oct 01 16:35:14 compute-0 ceph-osd[90269]: set uid:gid to 167:167 (ceph:ceph)
Oct 01 16:35:14 compute-0 ceph-osd[90269]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 01 16:35:14 compute-0 ceph-osd[90269]: pidfile_write: ignore empty --pid-file
Oct 01 16:35:14 compute-0 ceph-osd[90269]: bdev(0x56260eca7800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 01 16:35:14 compute-0 ceph-osd[90269]: bdev(0x56260eca7800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 01 16:35:14 compute-0 ceph-osd[90269]: bdev(0x56260eca7800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:14 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 16:35:14 compute-0 ceph-osd[90269]: bdev(0x56260fadf800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 01 16:35:14 compute-0 ceph-osd[90269]: bdev(0x56260fadf800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 01 16:35:14 compute-0 ceph-osd[90269]: bdev(0x56260fadf800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:14 compute-0 ceph-osd[90269]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 01 16:35:14 compute-0 ceph-osd[90269]: bdev(0x56260fadf800 /var/lib/ceph/osd/ceph-2/block) close
Oct 01 16:35:14 compute-0 sudo[89255]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b906b8ad0ceedafe44290e88b1f4ca052ae0723ddfbc017df25e0209a0a8c3b-merged.mount: Deactivated successfully.
Oct 01 16:35:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:35:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:35:14 compute-0 ceph-mgr[74571]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3366200123; not ready for session (expect reconnect)
Oct 01 16:35:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 16:35:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:14 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 16:35:14 compute-0 podman[90030]: 2025-10-01 16:35:14.878457029 +0000 UTC m=+0.911483829 container remove 942e4e7728699b1bb335b70f4d51b2bbab82e81c662918c062975beda8a23a85 (image=quay.io/ceph/ceph:v18, name=dreamy_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:14 compute-0 systemd[1]: libpod-conmon-942e4e7728699b1bb335b70f4d51b2bbab82e81c662918c062975beda8a23a85.scope: Deactivated successfully.
Oct 01 16:35:14 compute-0 sudo[90018]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct 01 16:35:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 01 16:35:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Oct 01 16:35:14 compute-0 sudo[90295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:14 compute-0 sudo[90295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:14 compute-0 sudo[90295]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Oct 01 16:35:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 16:35:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 16:35:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:14 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 16:35:14 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 16:35:14 compute-0 ceph-mon[74273]: purged_snaps scrub starts
Oct 01 16:35:14 compute-0 ceph-mon[74273]: purged_snaps scrub ok
Oct 01 16:35:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 01 16:35:14 compute-0 ceph-mon[74273]: osdmap e11: 3 total, 1 up, 3 in
Oct 01 16:35:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 01 16:35:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3471500302' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 01 16:35:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:14 compute-0 sudo[90320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:14 compute-0 sudo[90320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:15 compute-0 sudo[90320]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:15 compute-0 sudo[90345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:15 compute-0 sudo[90345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:15 compute-0 sudo[90345]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bdev(0x56260eca7800 /var/lib/ceph/osd/ceph-2/block) close
Oct 01 16:35:15 compute-0 sudo[90370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:35:15 compute-0 sudo[90370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:15 compute-0 sudo[90420]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swuwtilztuwecbdtniygscdcprpiipgt ; /usr/bin/python3'
Oct 01 16:35:15 compute-0 sudo[90420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Oct 01 16:35:15 compute-0 python3[90422]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Oct 01 16:35:15 compute-0 ceph-osd[90269]: load: jerasure load: lrc 
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bdev(0x56260fb60c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bdev(0x56260fb60c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bdev(0x56260fb60c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bdev(0x56260fb60c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 01 16:35:15 compute-0 podman[90442]: 2025-10-01 16:35:15.383968716 +0000 UTC m=+0.064524940 container create cf78d0b4c13b78d1bee53a3716a4b198a75235e2e6e24908c99fbebb572a7d0a (image=quay.io/ceph/ceph:v18, name=elated_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 16:35:15 compute-0 systemd[1]: Started libpod-conmon-cf78d0b4c13b78d1bee53a3716a4b198a75235e2e6e24908c99fbebb572a7d0a.scope.
Oct 01 16:35:15 compute-0 podman[90442]: 2025-10-01 16:35:15.343837985 +0000 UTC m=+0.024394229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:15 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22b1a182bc13180e1deab7db3a0ff5bc507678fd266a83aa8dce2a42bea2155c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22b1a182bc13180e1deab7db3a0ff5bc507678fd266a83aa8dce2a42bea2155c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:15 compute-0 podman[90442]: 2025-10-01 16:35:15.497230865 +0000 UTC m=+0.177787089 container init cf78d0b4c13b78d1bee53a3716a4b198a75235e2e6e24908c99fbebb572a7d0a (image=quay.io/ceph/ceph:v18, name=elated_pasteur, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 16:35:15 compute-0 podman[90442]: 2025-10-01 16:35:15.504374587 +0000 UTC m=+0.184930791 container start cf78d0b4c13b78d1bee53a3716a4b198a75235e2e6e24908c99fbebb572a7d0a (image=quay.io/ceph/ceph:v18, name=elated_pasteur, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 16:35:15 compute-0 podman[90442]: 2025-10-01 16:35:15.520798929 +0000 UTC m=+0.201355163 container attach cf78d0b4c13b78d1bee53a3716a4b198a75235e2e6e24908c99fbebb572a7d0a (image=quay.io/ceph/ceph:v18, name=elated_pasteur, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 01 16:35:15 compute-0 podman[90486]: 2025-10-01 16:35:15.593680413 +0000 UTC m=+0.046045553 container create 4ec57996e1f87dc59648a15adaafe43ec5feac5020c45b0e29b2df232fc24201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_diffie, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bdev(0x56260fb60c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bdev(0x56260fb60c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bdev(0x56260fb60c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bdev(0x56260fb60c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 01 16:35:15 compute-0 systemd[1]: Started libpod-conmon-4ec57996e1f87dc59648a15adaafe43ec5feac5020c45b0e29b2df232fc24201.scope.
Oct 01 16:35:15 compute-0 podman[90486]: 2025-10-01 16:35:15.567840637 +0000 UTC m=+0.020205787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:15 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:15 compute-0 podman[90486]: 2025-10-01 16:35:15.692826759 +0000 UTC m=+0.145191929 container init 4ec57996e1f87dc59648a15adaafe43ec5feac5020c45b0e29b2df232fc24201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_diffie, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:15 compute-0 podman[90486]: 2025-10-01 16:35:15.700631372 +0000 UTC m=+0.152996502 container start 4ec57996e1f87dc59648a15adaafe43ec5feac5020c45b0e29b2df232fc24201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:35:15 compute-0 podman[90486]: 2025-10-01 16:35:15.703739787 +0000 UTC m=+0.156104967 container attach 4ec57996e1f87dc59648a15adaafe43ec5feac5020c45b0e29b2df232fc24201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:15 compute-0 bold_diffie[90506]: 167 167
Oct 01 16:35:15 compute-0 systemd[1]: libpod-4ec57996e1f87dc59648a15adaafe43ec5feac5020c45b0e29b2df232fc24201.scope: Deactivated successfully.
Oct 01 16:35:15 compute-0 podman[90486]: 2025-10-01 16:35:15.704625284 +0000 UTC m=+0.156990464 container died 4ec57996e1f87dc59648a15adaafe43ec5feac5020c45b0e29b2df232fc24201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_diffie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f12fbafc4e943b4089ed6c6365c5a7db9f0d7b33c224eb871774dec2e05f6fcf-merged.mount: Deactivated successfully.
Oct 01 16:35:15 compute-0 podman[90486]: 2025-10-01 16:35:15.780945409 +0000 UTC m=+0.233310539 container remove 4ec57996e1f87dc59648a15adaafe43ec5feac5020c45b0e29b2df232fc24201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 16:35:15 compute-0 systemd[1]: libpod-conmon-4ec57996e1f87dc59648a15adaafe43ec5feac5020c45b0e29b2df232fc24201.scope: Deactivated successfully.
Oct 01 16:35:15 compute-0 ceph-mgr[74571]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3366200123; not ready for session (expect reconnect)
Oct 01 16:35:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 16:35:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:15 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 16:35:15 compute-0 ceph-osd[90269]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 01 16:35:15 compute-0 ceph-osd[90269]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bdev(0x56260fb60c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bdev(0x56260fb60c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bdev(0x56260fb60c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bdev(0x56260fb61400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bdev(0x56260fb61400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bdev(0x56260fb61400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bluefs mount
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bluefs mount shared_bdev_used = 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: RocksDB version: 7.9.2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Git sha 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: DB SUMMARY
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: DB Session ID:  0WNIT27MCCDJ7JRRVPFZ
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: CURRENT file:  CURRENT
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: IDENTITY file:  IDENTITY
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                         Options.error_if_exists: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                       Options.create_if_missing: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                         Options.paranoid_checks: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                                     Options.env: 0x56260fb31c70
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                                Options.info_log: 0x56260ed2e8a0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.max_file_opening_threads: 16
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                              Options.statistics: (nil)
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                               Options.use_fsync: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                       Options.max_log_file_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                         Options.allow_fallocate: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.use_direct_reads: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.create_missing_column_families: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                              Options.db_log_dir: 
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                                 Options.wal_dir: db.wal
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.advise_random_on_open: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.write_buffer_manager: 0x56260fc3a460
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                            Options.rate_limiter: (nil)
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.unordered_write: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                               Options.row_cache: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                              Options.wal_filter: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.allow_ingest_behind: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.two_write_queues: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.manual_wal_flush: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.wal_compression: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.atomic_flush: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                 Options.log_readahead_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.allow_data_in_errors: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.db_host_id: __hostname__
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.max_background_jobs: 4
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.max_background_compactions: -1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.max_subcompactions: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.max_open_files: -1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.bytes_per_sync: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.max_background_flushes: -1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Compression algorithms supported:
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         kZSTD supported: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         kXpressCompression supported: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         kBZip2Compression supported: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         kLZ4Compression supported: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         kZlibCompression supported: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         kLZ4HCCompression supported: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         kSnappyCompression supported: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56260ed2e2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56260ed1b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56260ed2e2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56260ed1b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56260ed2e2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56260ed1b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56260ed2e2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56260ed1b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56260ed2e2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56260ed1b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56260ed2e2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56260ed1b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56260ed2e2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56260ed1b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56260ed2e240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56260ed1b090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56260ed2e240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56260ed1b090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56260ed2e240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56260ed1b090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 23cb4cbe-ddad-4618-bad4-1c2e4ba95b0c
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336515925353, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336515925578, "job": 1, "event": "recovery_finished"}
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Oct 01 16:35:15 compute-0 ceph-osd[90269]: freelist init
Oct 01 16:35:15 compute-0 ceph-osd[90269]: freelist _read_cfg
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 01 16:35:15 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bluefs umount
Oct 01 16:35:15 compute-0 ceph-osd[90269]: bdev(0x56260fb61400 /var/lib/ceph/osd/ceph-2/block) close
Oct 01 16:35:15 compute-0 podman[90546]: 2025-10-01 16:35:15.939386861 +0000 UTC m=+0.056304860 container create 67339cd24266488ad3766aa7e862ffd8494f50fea920f35f8a597f1572f672ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 16:35:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 01 16:35:15 compute-0 ceph-mon[74273]: osdmap e12: 3 total, 1 up, 3 in
Oct 01 16:35:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:15 compute-0 ceph-mon[74273]: pgmap v35: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Oct 01 16:35:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:15 compute-0 systemd[1]: Started libpod-conmon-67339cd24266488ad3766aa7e862ffd8494f50fea920f35f8a597f1572f672ff.scope.
Oct 01 16:35:15 compute-0 ceph-osd[89167]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 34.753 iops: 8896.663 elapsed_sec: 0.337
Oct 01 16:35:15 compute-0 ceph-osd[89167]: log_channel(cluster) log [WRN] : OSD bench result of 8896.662674 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 01 16:35:15 compute-0 ceph-osd[89167]: osd.1 0 waiting for initial osdmap
Oct 01 16:35:15 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1[89163]: 2025-10-01T16:35:15.988+0000 7f68e17ed640 -1 osd.1 0 waiting for initial osdmap
Oct 01 16:35:16 compute-0 ceph-osd[89167]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 01 16:35:16 compute-0 ceph-osd[89167]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 01 16:35:16 compute-0 ceph-osd[89167]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 01 16:35:16 compute-0 ceph-osd[89167]: osd.1 12 check_osdmap_features require_osd_release unknown -> reef
Oct 01 16:35:16 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb71b5f06b961b51c997df24ece3a0852760fde69d7c3616c8579dbdbb7b43d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb71b5f06b961b51c997df24ece3a0852760fde69d7c3616c8579dbdbb7b43d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb71b5f06b961b51c997df24ece3a0852760fde69d7c3616c8579dbdbb7b43d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb71b5f06b961b51c997df24ece3a0852760fde69d7c3616c8579dbdbb7b43d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:16 compute-0 podman[90546]: 2025-10-01 16:35:15.917610899 +0000 UTC m=+0.034528928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:16 compute-0 ceph-osd[89167]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 01 16:35:16 compute-0 ceph-osd[89167]: osd.1 12 set_numa_affinity not setting numa affinity
Oct 01 16:35:16 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-1[89163]: 2025-10-01T16:35:16.012+0000 7f68dc5fe640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 01 16:35:16 compute-0 ceph-osd[89167]: osd.1 12 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Oct 01 16:35:16 compute-0 podman[90546]: 2025-10-01 16:35:16.020591394 +0000 UTC m=+0.137509393 container init 67339cd24266488ad3766aa7e862ffd8494f50fea920f35f8a597f1572f672ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swanson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:16 compute-0 podman[90546]: 2025-10-01 16:35:16.027042389 +0000 UTC m=+0.143960388 container start 67339cd24266488ad3766aa7e862ffd8494f50fea920f35f8a597f1572f672ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swanson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:16 compute-0 podman[90546]: 2025-10-01 16:35:16.030926923 +0000 UTC m=+0.147844922 container attach 67339cd24266488ad3766aa7e862ffd8494f50fea920f35f8a597f1572f672ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swanson, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 16:35:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 01 16:35:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/862191005' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 16:35:16 compute-0 ceph-osd[90269]: bdev(0x56260fb61400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 01 16:35:16 compute-0 ceph-osd[90269]: bdev(0x56260fb61400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 01 16:35:16 compute-0 ceph-osd[90269]: bdev(0x56260fb61400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 16:35:16 compute-0 ceph-osd[90269]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 01 16:35:16 compute-0 ceph-osd[90269]: bluefs mount
Oct 01 16:35:16 compute-0 ceph-osd[90269]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: bluefs mount shared_bdev_used = 4718592
Oct 01 16:35:16 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: RocksDB version: 7.9.2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Git sha 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: DB SUMMARY
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: DB Session ID:  0WNIT27MCCDJ7JRRVPFY
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: CURRENT file:  CURRENT
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: IDENTITY file:  IDENTITY
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                         Options.error_if_exists: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                       Options.create_if_missing: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                         Options.paranoid_checks: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                                     Options.env: 0x56260fce2380
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                                Options.info_log: 0x56260ed25280
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.max_file_opening_threads: 16
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                              Options.statistics: (nil)
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                               Options.use_fsync: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                       Options.max_log_file_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                         Options.allow_fallocate: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.use_direct_reads: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.create_missing_column_families: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                              Options.db_log_dir: 
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                                 Options.wal_dir: db.wal
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.advise_random_on_open: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.write_buffer_manager: 0x56260fc3a6e0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                            Options.rate_limiter: (nil)
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.unordered_write: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                               Options.row_cache: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                              Options.wal_filter: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.allow_ingest_behind: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.two_write_queues: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.manual_wal_flush: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.wal_compression: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.atomic_flush: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                 Options.log_readahead_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.allow_data_in_errors: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.db_host_id: __hostname__
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.max_background_jobs: 4
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.max_background_compactions: -1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.max_subcompactions: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.max_open_files: -1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.bytes_per_sync: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.max_background_flushes: -1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Compression algorithms supported:
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         kZSTD supported: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         kXpressCompression supported: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         kBZip2Compression supported: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         kLZ4Compression supported: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         kZlibCompression supported: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         kLZ4HCCompression supported: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         kSnappyCompression supported: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56260ed01c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56260ed1b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56260ed01c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56260ed1b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56260ed01c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56260ed1b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56260ed01c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56260ed1b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56260ed01c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56260ed1b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56260ed01c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56260ed1b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56260ed01c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56260ed1b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56260ed25840)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56260ed1b090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56260ed25840)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56260ed1b090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:           Options.merge_operator: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56260ed25840)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56260ed1b090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.compression: LZ4
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.num_levels: 7
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                           Options.bloom_locality: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                               Options.ttl: 2592000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                       Options.enable_blob_files: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                           Options.min_blob_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 23cb4cbe-ddad-4618-bad4-1c2e4ba95b0c
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336516186062, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336516191146, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336516, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cb4cbe-ddad-4618-bad4-1c2e4ba95b0c", "db_session_id": "0WNIT27MCCDJ7JRRVPFY", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336516194176, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1591, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 465, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336516, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cb4cbe-ddad-4618-bad4-1c2e4ba95b0c", "db_session_id": "0WNIT27MCCDJ7JRRVPFY", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336516196403, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336516, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cb4cbe-ddad-4618-bad4-1c2e4ba95b0c", "db_session_id": "0WNIT27MCCDJ7JRRVPFY", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336516197853, "job": 1, "event": "recovery_finished"}
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56260ee89c00
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: DB pointer 0x56260fc23a00
Oct 01 16:35:16 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 01 16:35:16 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Oct 01 16:35:16 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 16:35:16 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.55 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.55 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 16:35:16 compute-0 ceph-osd[90269]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 01 16:35:16 compute-0 ceph-osd[90269]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 01 16:35:16 compute-0 ceph-osd[90269]: _get_class not permitted to load lua
Oct 01 16:35:16 compute-0 ceph-osd[90269]: _get_class not permitted to load sdk
Oct 01 16:35:16 compute-0 ceph-osd[90269]: _get_class not permitted to load test_remote_reads
Oct 01 16:35:16 compute-0 ceph-osd[90269]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 01 16:35:16 compute-0 ceph-osd[90269]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 01 16:35:16 compute-0 ceph-osd[90269]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 01 16:35:16 compute-0 ceph-osd[90269]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 01 16:35:16 compute-0 ceph-osd[90269]: osd.2 0 load_pgs
Oct 01 16:35:16 compute-0 ceph-osd[90269]: osd.2 0 load_pgs opened 0 pgs
Oct 01 16:35:16 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2[90263]: 2025-10-01T16:35:16.220+0000 7f8fe63eb740 -1 osd.2 0 log_to_monitors true
Oct 01 16:35:16 compute-0 ceph-osd[90269]: osd.2 0 log_to_monitors true
Oct 01 16:35:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Oct 01 16:35:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2139758837,v1:192.168.122.100:6811/2139758837]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 01 16:35:16 compute-0 ceph-mgr[74571]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3366200123; not ready for session (expect reconnect)
Oct 01 16:35:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 16:35:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:16 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 16:35:16 compute-0 jovial_swanson[90759]: {
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:         "osd_id": 2,
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:         "type": "bluestore"
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:     },
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:         "osd_id": 0,
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:         "type": "bluestore"
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:     },
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:         "osd_id": 1,
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:         "type": "bluestore"
Oct 01 16:35:16 compute-0 jovial_swanson[90759]:     }
Oct 01 16:35:16 compute-0 jovial_swanson[90759]: }
Oct 01 16:35:16 compute-0 systemd[1]: libpod-67339cd24266488ad3766aa7e862ffd8494f50fea920f35f8a597f1572f672ff.scope: Deactivated successfully.
Oct 01 16:35:16 compute-0 podman[90546]: 2025-10-01 16:35:16.929963644 +0000 UTC m=+1.046881643 container died 67339cd24266488ad3766aa7e862ffd8494f50fea920f35f8a597f1572f672ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swanson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct 01 16:35:16 compute-0 ceph-osd[89167]: osd.1 12 tick checking mon for new map
Oct 01 16:35:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-efb71b5f06b961b51c997df24ece3a0852760fde69d7c3616c8579dbdbb7b43d-merged.mount: Deactivated successfully.
Oct 01 16:35:16 compute-0 podman[90546]: 2025-10-01 16:35:16.979357935 +0000 UTC m=+1.096275934 container remove 67339cd24266488ad3766aa7e862ffd8494f50fea920f35f8a597f1572f672ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct 01 16:35:16 compute-0 systemd[1]: libpod-conmon-67339cd24266488ad3766aa7e862ffd8494f50fea920f35f8a597f1572f672ff.scope: Deactivated successfully.
Oct 01 16:35:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/862191005' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 16:35:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2139758837,v1:192.168.122.100:6811/2139758837]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 01 16:35:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Oct 01 16:35:16 compute-0 elated_pasteur[90482]: pool 'vms' created
Oct 01 16:35:16 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/3366200123,v1:192.168.122.100:6807/3366200123] boot
Oct 01 16:35:16 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Oct 01 16:35:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 01 16:35:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2139758837,v1:192.168.122.100:6811/2139758837]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 01 16:35:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 01 16:35:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 16:35:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 16:35:16 compute-0 ceph-osd[89167]: osd.1 13 state: booting -> active
Oct 01 16:35:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:35:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 13 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:35:16 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 16:35:17 compute-0 ceph-mon[74273]: OSD bench result of 8896.662674 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 01 16:35:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/862191005' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 16:35:17 compute-0 ceph-mon[74273]: from='osd.2 [v2:192.168.122.100:6810/2139758837,v1:192.168.122.100:6811/2139758837]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 01 16:35:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:17 compute-0 systemd[1]: libpod-cf78d0b4c13b78d1bee53a3716a4b198a75235e2e6e24908c99fbebb572a7d0a.scope: Deactivated successfully.
Oct 01 16:35:17 compute-0 conmon[90482]: conmon cf78d0b4c13b78d1bee5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cf78d0b4c13b78d1bee53a3716a4b198a75235e2e6e24908c99fbebb572a7d0a.scope/container/memory.events
Oct 01 16:35:17 compute-0 podman[90442]: 2025-10-01 16:35:17.011687093 +0000 UTC m=+1.692243297 container died cf78d0b4c13b78d1bee53a3716a4b198a75235e2e6e24908c99fbebb572a7d0a (image=quay.io/ceph/ceph:v18, name=elated_pasteur, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:17 compute-0 sudo[90370]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:35:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:35:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-22b1a182bc13180e1deab7db3a0ff5bc507678fd266a83aa8dce2a42bea2155c-merged.mount: Deactivated successfully.
Oct 01 16:35:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:17 compute-0 podman[90442]: 2025-10-01 16:35:17.065443728 +0000 UTC m=+1.745999942 container remove cf78d0b4c13b78d1bee53a3716a4b198a75235e2e6e24908c99fbebb572a7d0a (image=quay.io/ceph/ceph:v18, name=elated_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 16:35:17 compute-0 systemd[1]: libpod-conmon-cf78d0b4c13b78d1bee53a3716a4b198a75235e2e6e24908c99fbebb572a7d0a.scope: Deactivated successfully.
Oct 01 16:35:17 compute-0 sudo[90420]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:17 compute-0 sudo[91031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:17 compute-0 sudo[91031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:17 compute-0 sudo[91031]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:17 compute-0 sudo[91062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:35:17 compute-0 sudo[91062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:17 compute-0 sudo[91062]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:17 compute-0 sudo[91124]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceproznmdbygungkqwjvuihbhyjnfpuf ; /usr/bin/python3'
Oct 01 16:35:17 compute-0 sudo[91124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:17 compute-0 sudo[91099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:17 compute-0 sudo[91099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:17 compute-0 sudo[91099]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:17 compute-0 sudo[91138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:17 compute-0 sudo[91138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:17 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 01 16:35:17 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 01 16:35:17 compute-0 sudo[91138]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v37: 2 pgs: 2 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Oct 01 16:35:17 compute-0 sudo[91163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:17 compute-0 sudo[91163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:17 compute-0 sudo[91163]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:17 compute-0 python3[91135]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:17 compute-0 sudo[91188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 16:35:17 compute-0 sudo[91188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:17 compute-0 podman[91209]: 2025-10-01 16:35:17.389555704 +0000 UTC m=+0.036061488 container create 192a4b894a4e53959a3c0f1d3f28fc4c470227b5e88e7d544d763e9fc86dacc9 (image=quay.io/ceph/ceph:v18, name=festive_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 16:35:17 compute-0 systemd[1]: Started libpod-conmon-192a4b894a4e53959a3c0f1d3f28fc4c470227b5e88e7d544d763e9fc86dacc9.scope.
Oct 01 16:35:17 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2299812aa067c620144cc4b37167bb9c6c2abd58a2bf59769f7faf4d7f4aa875/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2299812aa067c620144cc4b37167bb9c6c2abd58a2bf59769f7faf4d7f4aa875/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:17 compute-0 podman[91209]: 2025-10-01 16:35:17.457214584 +0000 UTC m=+0.103720398 container init 192a4b894a4e53959a3c0f1d3f28fc4c470227b5e88e7d544d763e9fc86dacc9 (image=quay.io/ceph/ceph:v18, name=festive_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:17 compute-0 podman[91209]: 2025-10-01 16:35:17.462952304 +0000 UTC m=+0.109458088 container start 192a4b894a4e53959a3c0f1d3f28fc4c470227b5e88e7d544d763e9fc86dacc9 (image=quay.io/ceph/ceph:v18, name=festive_rhodes, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:35:17 compute-0 podman[91209]: 2025-10-01 16:35:17.465973048 +0000 UTC m=+0.112478832 container attach 192a4b894a4e53959a3c0f1d3f28fc4c470227b5e88e7d544d763e9fc86dacc9 (image=quay.io/ceph/ceph:v18, name=festive_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:17 compute-0 podman[91209]: 2025-10-01 16:35:17.375088058 +0000 UTC m=+0.021593862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:17 compute-0 podman[91303]: 2025-10-01 16:35:17.806172102 +0000 UTC m=+0.061093779 container exec bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 16:35:17 compute-0 podman[91303]: 2025-10-01 16:35:17.891126823 +0000 UTC m=+0.146048480 container exec_died bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 16:35:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct 01 16:35:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 01 16:35:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/805049338' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 16:35:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2139758837,v1:192.168.122.100:6811/2139758837]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 01 16:35:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Oct 01 16:35:17 compute-0 ceph-osd[90269]: osd.2 0 done with init, starting boot process
Oct 01 16:35:17 compute-0 ceph-osd[90269]: osd.2 0 start_boot
Oct 01 16:35:17 compute-0 ceph-osd[90269]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 01 16:35:17 compute-0 ceph-osd[90269]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 01 16:35:17 compute-0 ceph-osd[90269]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 01 16:35:17 compute-0 ceph-osd[90269]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 01 16:35:17 compute-0 ceph-osd[90269]: osd.2 0  bench count 12288000 bsize 4 KiB
Oct 01 16:35:18 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Oct 01 16:35:18 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 14 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=14) [] r=-1 lpr=14 pi=[13,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:35:18 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 14 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=14) [] r=-1 lpr=14 pi=[13,14)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:35:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 16:35:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:35:18 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 16:35:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/862191005' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 16:35:18 compute-0 ceph-mon[74273]: from='osd.2 [v2:192.168.122.100:6810/2139758837,v1:192.168.122.100:6811/2139758837]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 01 16:35:18 compute-0 ceph-mon[74273]: osd.1 [v2:192.168.122.100:6806/3366200123,v1:192.168.122.100:6807/3366200123] boot
Oct 01 16:35:18 compute-0 ceph-mon[74273]: osdmap e13: 3 total, 2 up, 3 in
Oct 01 16:35:18 compute-0 ceph-mon[74273]: from='osd.2 [v2:192.168.122.100:6810/2139758837,v1:192.168.122.100:6811/2139758837]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 01 16:35:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 16:35:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:18 compute-0 ceph-mon[74273]: pgmap v37: 2 pgs: 2 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Oct 01 16:35:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/805049338' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 16:35:18 compute-0 ceph-mon[74273]: from='osd.2 [v2:192.168.122.100:6810/2139758837,v1:192.168.122.100:6811/2139758837]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 01 16:35:18 compute-0 ceph-mgr[74571]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2139758837; not ready for session (expect reconnect)
Oct 01 16:35:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 16:35:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:18 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 16:35:18 compute-0 ceph-mon[74273]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 01 16:35:18 compute-0 ceph-mgr[74571]: [devicehealth INFO root] creating main.db for devicehealth
Oct 01 16:35:18 compute-0 ceph-mgr[74571]: [devicehealth INFO root] Check health
Oct 01 16:35:18 compute-0 ceph-mgr[74571]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Oct 01 16:35:18 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 01 16:35:18 compute-0 sudo[91462]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Oct 01 16:35:18 compute-0 sudo[91462]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 01 16:35:18 compute-0 sudo[91462]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Oct 01 16:35:18 compute-0 sudo[91188]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:18 compute-0 sudo[91462]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:18 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 01 16:35:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 01 16:35:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 01 16:35:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:35:18 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:35:18 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:18 compute-0 sudo[91465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:18 compute-0 sudo[91465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:18 compute-0 sudo[91465]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:18 compute-0 sudo[91490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:18 compute-0 sudo[91490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:18 compute-0 sudo[91490]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:18 compute-0 sudo[91515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:18 compute-0 sudo[91515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:18 compute-0 sudo[91515]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:18 compute-0 sudo[91540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- inventory --format=json-pretty --filter-for-batch
Oct 01 16:35:18 compute-0 sudo[91540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:35:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct 01 16:35:19 compute-0 podman[91604]: 2025-10-01 16:35:19.001105022 +0000 UTC m=+0.056468982 container create 8c886d0ba44d7339207481aab98f88d8a669750eccd9c0bf91f134f0d8f4329c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 16:35:19 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/805049338' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 16:35:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Oct 01 16:35:19 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Oct 01 16:35:19 compute-0 festive_rhodes[91228]: pool 'volumes' created
Oct 01 16:35:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 16:35:19 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 16:35:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 16:35:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:19 compute-0 ceph-mgr[74571]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2139758837; not ready for session (expect reconnect)
Oct 01 16:35:19 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 16:35:19 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 15 pg[3.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:35:19 compute-0 ceph-mon[74273]: osdmap e14: 3 total, 2 up, 3 in
Oct 01 16:35:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:19 compute-0 ceph-mon[74273]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 01 16:35:19 compute-0 ceph-mon[74273]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 01 16:35:19 compute-0 ceph-mon[74273]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 01 16:35:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 01 16:35:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/805049338' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 16:35:19 compute-0 ceph-mon[74273]: osdmap e15: 3 total, 2 up, 3 in
Oct 01 16:35:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:19 compute-0 systemd[1]: libpod-192a4b894a4e53959a3c0f1d3f28fc4c470227b5e88e7d544d763e9fc86dacc9.scope: Deactivated successfully.
Oct 01 16:35:19 compute-0 systemd[1]: Started libpod-conmon-8c886d0ba44d7339207481aab98f88d8a669750eccd9c0bf91f134f0d8f4329c.scope.
Oct 01 16:35:19 compute-0 podman[91604]: 2025-10-01 16:35:18.970774936 +0000 UTC m=+0.026138936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:19 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:19 compute-0 podman[91621]: 2025-10-01 16:35:19.093394105 +0000 UTC m=+0.033932750 container died 192a4b894a4e53959a3c0f1d3f28fc4c470227b5e88e7d544d763e9fc86dacc9 (image=quay.io/ceph/ceph:v18, name=festive_rhodes, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:19 compute-0 podman[91604]: 2025-10-01 16:35:19.150960952 +0000 UTC m=+0.206324932 container init 8c886d0ba44d7339207481aab98f88d8a669750eccd9c0bf91f134f0d8f4329c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_einstein, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 16:35:19 compute-0 podman[91604]: 2025-10-01 16:35:19.159181602 +0000 UTC m=+0.214545592 container start 8c886d0ba44d7339207481aab98f88d8a669750eccd9c0bf91f134f0d8f4329c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 01 16:35:19 compute-0 pedantic_einstein[91627]: 167 167
Oct 01 16:35:19 compute-0 systemd[1]: libpod-8c886d0ba44d7339207481aab98f88d8a669750eccd9c0bf91f134f0d8f4329c.scope: Deactivated successfully.
Oct 01 16:35:19 compute-0 conmon[91627]: conmon 8c886d0ba44d73392074 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c886d0ba44d7339207481aab98f88d8a669750eccd9c0bf91f134f0d8f4329c.scope/container/memory.events
Oct 01 16:35:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2299812aa067c620144cc4b37167bb9c6c2abd58a2bf59769f7faf4d7f4aa875-merged.mount: Deactivated successfully.
Oct 01 16:35:19 compute-0 podman[91604]: 2025-10-01 16:35:19.212284796 +0000 UTC m=+0.267648796 container attach 8c886d0ba44d7339207481aab98f88d8a669750eccd9c0bf91f134f0d8f4329c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_einstein, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 16:35:19 compute-0 podman[91604]: 2025-10-01 16:35:19.21376946 +0000 UTC m=+0.269133460 container died 8c886d0ba44d7339207481aab98f88d8a669750eccd9c0bf91f134f0d8f4329c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_einstein, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 01 16:35:19 compute-0 podman[91621]: 2025-10-01 16:35:19.243188812 +0000 UTC m=+0.183727427 container remove 192a4b894a4e53959a3c0f1d3f28fc4c470227b5e88e7d544d763e9fc86dacc9 (image=quay.io/ceph/ceph:v18, name=festive_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 16:35:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-82cb09da65591924fefb7e86e629cb69b0c154703c5c3c84c866084572f75989-merged.mount: Deactivated successfully.
Oct 01 16:35:19 compute-0 systemd[1]: libpod-conmon-192a4b894a4e53959a3c0f1d3f28fc4c470227b5e88e7d544d763e9fc86dacc9.scope: Deactivated successfully.
Oct 01 16:35:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v40: 3 pgs: 1 creating+peering, 2 unknown; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 01 16:35:19 compute-0 podman[91604]: 2025-10-01 16:35:19.298871566 +0000 UTC m=+0.354235536 container remove 8c886d0ba44d7339207481aab98f88d8a669750eccd9c0bf91f134f0d8f4329c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_einstein, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:19 compute-0 systemd[1]: libpod-conmon-8c886d0ba44d7339207481aab98f88d8a669750eccd9c0bf91f134f0d8f4329c.scope: Deactivated successfully.
Oct 01 16:35:19 compute-0 sudo[91124]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:19 compute-0 podman[91661]: 2025-10-01 16:35:19.47497038 +0000 UTC m=+0.044502005 container create a5be5f9db122c1d1fef5f00de78aa6540b780a2366ab2082e7ebc9dea9ff80e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:19 compute-0 sudo[91697]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddsxccyqbekipwcjtpwoqbjicujduoel ; /usr/bin/python3'
Oct 01 16:35:19 compute-0 sudo[91697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:19 compute-0 systemd[1]: Started libpod-conmon-a5be5f9db122c1d1fef5f00de78aa6540b780a2366ab2082e7ebc9dea9ff80e8.scope.
Oct 01 16:35:19 compute-0 podman[91661]: 2025-10-01 16:35:19.455080541 +0000 UTC m=+0.024612196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:19 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99840260522cdd94334da87d2d964fb719eced5b6ece244304daa211252adfe3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99840260522cdd94334da87d2d964fb719eced5b6ece244304daa211252adfe3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99840260522cdd94334da87d2d964fb719eced5b6ece244304daa211252adfe3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99840260522cdd94334da87d2d964fb719eced5b6ece244304daa211252adfe3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:19 compute-0 podman[91661]: 2025-10-01 16:35:19.588239128 +0000 UTC m=+0.157770753 container init a5be5f9db122c1d1fef5f00de78aa6540b780a2366ab2082e7ebc9dea9ff80e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 01 16:35:19 compute-0 podman[91661]: 2025-10-01 16:35:19.596238811 +0000 UTC m=+0.165770436 container start a5be5f9db122c1d1fef5f00de78aa6540b780a2366ab2082e7ebc9dea9ff80e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cray, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:19 compute-0 podman[91661]: 2025-10-01 16:35:19.613220734 +0000 UTC m=+0.182752359 container attach a5be5f9db122c1d1fef5f00de78aa6540b780a2366ab2082e7ebc9dea9ff80e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cray, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:19 compute-0 python3[91699]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:19 compute-0 podman[91708]: 2025-10-01 16:35:19.692227677 +0000 UTC m=+0.033265210 container create b0fa54dbedfbdb51710da9dd43d55f98947a27d13ae8d6b867090589f7215b1e (image=quay.io/ceph/ceph:v18, name=charming_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 16:35:19 compute-0 systemd[1]: Started libpod-conmon-b0fa54dbedfbdb51710da9dd43d55f98947a27d13ae8d6b867090589f7215b1e.scope.
Oct 01 16:35:19 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9bf22d32205a648721c6a59c9517a63cc1fb2d24480193b53b695fb06522fff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9bf22d32205a648721c6a59c9517a63cc1fb2d24480193b53b695fb06522fff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:19 compute-0 podman[91708]: 2025-10-01 16:35:19.763815187 +0000 UTC m=+0.104852740 container init b0fa54dbedfbdb51710da9dd43d55f98947a27d13ae8d6b867090589f7215b1e (image=quay.io/ceph/ceph:v18, name=charming_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 16:35:19 compute-0 podman[91708]: 2025-10-01 16:35:19.770527035 +0000 UTC m=+0.111564568 container start b0fa54dbedfbdb51710da9dd43d55f98947a27d13ae8d6b867090589f7215b1e (image=quay.io/ceph/ceph:v18, name=charming_solomon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 16:35:19 compute-0 podman[91708]: 2025-10-01 16:35:19.679120389 +0000 UTC m=+0.020157942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:19 compute-0 podman[91708]: 2025-10-01 16:35:19.785754002 +0000 UTC m=+0.126791585 container attach b0fa54dbedfbdb51710da9dd43d55f98947a27d13ae8d6b867090589f7215b1e (image=quay.io/ceph/ceph:v18, name=charming_solomon, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct 01 16:35:20 compute-0 ceph-mgr[74571]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2139758837; not ready for session (expect reconnect)
Oct 01 16:35:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 16:35:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:20 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 16:35:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Oct 01 16:35:20 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Oct 01 16:35:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 16:35:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:20 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 16:35:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 16 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:35:20 compute-0 ceph-mon[74273]: purged_snaps scrub starts
Oct 01 16:35:20 compute-0 ceph-mon[74273]: purged_snaps scrub ok
Oct 01 16:35:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:20 compute-0 ceph-mon[74273]: pgmap v40: 3 pgs: 1 creating+peering, 2 unknown; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 01 16:35:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:20 compute-0 ceph-mon[74273]: osdmap e16: 3 total, 2 up, 3 in
Oct 01 16:35:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:20 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.pmbdpj(active, since 68s)
Oct 01 16:35:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 01 16:35:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2316982609' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 16:35:20 compute-0 keen_cray[91703]: [
Oct 01 16:35:20 compute-0 keen_cray[91703]:     {
Oct 01 16:35:20 compute-0 keen_cray[91703]:         "available": false,
Oct 01 16:35:20 compute-0 keen_cray[91703]:         "ceph_device": false,
Oct 01 16:35:20 compute-0 keen_cray[91703]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 01 16:35:20 compute-0 keen_cray[91703]:         "lsm_data": {},
Oct 01 16:35:20 compute-0 keen_cray[91703]:         "lvs": [],
Oct 01 16:35:20 compute-0 keen_cray[91703]:         "path": "/dev/sr0",
Oct 01 16:35:20 compute-0 keen_cray[91703]:         "rejected_reasons": [
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "Insufficient space (<5GB)",
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "Has a FileSystem"
Oct 01 16:35:20 compute-0 keen_cray[91703]:         ],
Oct 01 16:35:20 compute-0 keen_cray[91703]:         "sys_api": {
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "actuators": null,
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "device_nodes": "sr0",
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "devname": "sr0",
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "human_readable_size": "482.00 KB",
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "id_bus": "ata",
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "model": "QEMU DVD-ROM",
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "nr_requests": "2",
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "parent": "/dev/sr0",
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "partitions": {},
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "path": "/dev/sr0",
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "removable": "1",
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "rev": "2.5+",
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "ro": "0",
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "rotational": "0",
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "sas_address": "",
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "sas_device_handle": "",
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "scheduler_mode": "mq-deadline",
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "sectors": 0,
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "sectorsize": "2048",
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "size": 493568.0,
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "support_discard": "2048",
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "type": "disk",
Oct 01 16:35:20 compute-0 keen_cray[91703]:             "vendor": "QEMU"
Oct 01 16:35:20 compute-0 keen_cray[91703]:         }
Oct 01 16:35:20 compute-0 keen_cray[91703]:     }
Oct 01 16:35:20 compute-0 keen_cray[91703]: ]
Oct 01 16:35:20 compute-0 systemd[1]: libpod-a5be5f9db122c1d1fef5f00de78aa6540b780a2366ab2082e7ebc9dea9ff80e8.scope: Deactivated successfully.
Oct 01 16:35:20 compute-0 systemd[1]: libpod-a5be5f9db122c1d1fef5f00de78aa6540b780a2366ab2082e7ebc9dea9ff80e8.scope: Consumed 1.309s CPU time.
Oct 01 16:35:20 compute-0 podman[91661]: 2025-10-01 16:35:20.911806848 +0000 UTC m=+1.481338473 container died a5be5f9db122c1d1fef5f00de78aa6540b780a2366ab2082e7ebc9dea9ff80e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 16:35:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-99840260522cdd94334da87d2d964fb719eced5b6ece244304daa211252adfe3-merged.mount: Deactivated successfully.
Oct 01 16:35:21 compute-0 podman[91661]: 2025-10-01 16:35:21.024336693 +0000 UTC m=+1.593868318 container remove a5be5f9db122c1d1fef5f00de78aa6540b780a2366ab2082e7ebc9dea9ff80e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 16:35:21 compute-0 ceph-mgr[74571]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2139758837; not ready for session (expect reconnect)
Oct 01 16:35:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 16:35:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:21 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 16:35:21 compute-0 systemd[1]: libpod-conmon-a5be5f9db122c1d1fef5f00de78aa6540b780a2366ab2082e7ebc9dea9ff80e8.scope: Deactivated successfully.
Oct 01 16:35:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct 01 16:35:21 compute-0 sudo[91540]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:35:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2316982609' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 16:35:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Oct 01 16:35:21 compute-0 charming_solomon[91723]: pool 'backups' created
Oct 01 16:35:21 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Oct 01 16:35:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 16:35:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:21 compute-0 systemd[1]: libpod-b0fa54dbedfbdb51710da9dd43d55f98947a27d13ae8d6b867090589f7215b1e.scope: Deactivated successfully.
Oct 01 16:35:21 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 16:35:21 compute-0 podman[91708]: 2025-10-01 16:35:21.095224056 +0000 UTC m=+1.436261609 container died b0fa54dbedfbdb51710da9dd43d55f98947a27d13ae8d6b867090589f7215b1e (image=quay.io/ceph/ceph:v18, name=charming_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:21 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 17 pg[4.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:35:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:35:21 compute-0 ceph-mon[74273]: mgrmap e9: compute-0.pmbdpj(active, since 68s)
Oct 01 16:35:21 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2316982609' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 16:35:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:21 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2316982609' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 16:35:21 compute-0 ceph-mon[74273]: osdmap e17: 3 total, 2 up, 3 in
Oct 01 16:35:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9bf22d32205a648721c6a59c9517a63cc1fb2d24480193b53b695fb06522fff-merged.mount: Deactivated successfully.
Oct 01 16:35:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Oct 01 16:35:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 01 16:35:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Oct 01 16:35:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 01 16:35:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Oct 01 16:35:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 01 16:35:21 compute-0 ceph-mgr[74571]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43639k
Oct 01 16:35:21 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43639k
Oct 01 16:35:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Oct 01 16:35:21 compute-0 ceph-mgr[74571]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44686677: error parsing value: Value '44686677' is below minimum 939524096
Oct 01 16:35:21 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44686677: error parsing value: Value '44686677' is below minimum 939524096
Oct 01 16:35:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:35:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:35:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:35:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:35:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:21 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev c96b652b-e044-4e70-8ec4-4ace9e4938fd does not exist
Oct 01 16:35:21 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 8a490e9e-2e5b-4dcd-ab3f-90f64680c6c8 does not exist
Oct 01 16:35:21 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 6bef064b-e7ea-44c3-8123-5ae3a9325134 does not exist
Oct 01 16:35:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:35:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:35:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:35:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:35:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:35:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:21 compute-0 podman[91708]: 2025-10-01 16:35:21.233741542 +0000 UTC m=+1.574779085 container remove b0fa54dbedfbdb51710da9dd43d55f98947a27d13ae8d6b867090589f7215b1e (image=quay.io/ceph/ceph:v18, name=charming_solomon, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 16:35:21 compute-0 systemd[1]: libpod-conmon-b0fa54dbedfbdb51710da9dd43d55f98947a27d13ae8d6b867090589f7215b1e.scope: Deactivated successfully.
Oct 01 16:35:21 compute-0 sudo[91697]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:21 compute-0 sudo[93397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:21 compute-0 sudo[93397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:21 compute-0 sudo[93397]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v43: 4 pgs: 1 creating+peering, 3 unknown; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 01 16:35:21 compute-0 sudo[93422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:21 compute-0 sudo[93422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:21 compute-0 sudo[93422]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:21 compute-0 sudo[93486]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgbqcklorehqgbmbntinpfkaundagrab ; /usr/bin/python3'
Oct 01 16:35:21 compute-0 sudo[93486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:21 compute-0 sudo[93459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:21 compute-0 sudo[93459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:21 compute-0 sudo[93459]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:21 compute-0 sudo[93498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:35:21 compute-0 sudo[93498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:21 compute-0 python3[93495]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:21 compute-0 ceph-osd[90269]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 17.687 iops: 4527.874 elapsed_sec: 0.663
Oct 01 16:35:21 compute-0 ceph-osd[90269]: log_channel(cluster) log [WRN] : OSD bench result of 4527.873881 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 01 16:35:21 compute-0 podman[93523]: 2025-10-01 16:35:21.600058463 +0000 UTC m=+0.059447780 container create ad92065580bb104ac3cf2eb6989fec19e91fa7f370e898724377bbe3ba34bcd2 (image=quay.io/ceph/ceph:v18, name=lucid_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 16:35:21 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2[90263]: 2025-10-01T16:35:21.599+0000 7f8fe2b82640 -1 osd.2 0 waiting for initial osdmap
Oct 01 16:35:21 compute-0 ceph-osd[90269]: osd.2 0 waiting for initial osdmap
Oct 01 16:35:21 compute-0 ceph-osd[90269]: osd.2 17 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 01 16:35:21 compute-0 ceph-osd[90269]: osd.2 17 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 01 16:35:21 compute-0 ceph-osd[90269]: osd.2 17 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 01 16:35:21 compute-0 ceph-osd[90269]: osd.2 17 check_osdmap_features require_osd_release unknown -> reef
Oct 01 16:35:21 compute-0 podman[93523]: 2025-10-01 16:35:21.563708737 +0000 UTC m=+0.023098084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:21 compute-0 systemd[1]: Started libpod-conmon-ad92065580bb104ac3cf2eb6989fec19e91fa7f370e898724377bbe3ba34bcd2.scope.
Oct 01 16:35:21 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8986b1209b957cf85f07bf0dfa848588b1a305e9963867023d7b851d10aa6da/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8986b1209b957cf85f07bf0dfa848588b1a305e9963867023d7b851d10aa6da/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:21 compute-0 podman[93523]: 2025-10-01 16:35:21.735934816 +0000 UTC m=+0.195324163 container init ad92065580bb104ac3cf2eb6989fec19e91fa7f370e898724377bbe3ba34bcd2 (image=quay.io/ceph/ceph:v18, name=lucid_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:21 compute-0 podman[93523]: 2025-10-01 16:35:21.744842154 +0000 UTC m=+0.204231461 container start ad92065580bb104ac3cf2eb6989fec19e91fa7f370e898724377bbe3ba34bcd2 (image=quay.io/ceph/ceph:v18, name=lucid_bouman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Oct 01 16:35:21 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-osd-2[90263]: 2025-10-01T16:35:21.744+0000 7f8fdd993640 -1 osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 01 16:35:21 compute-0 ceph-osd[90269]: osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 01 16:35:21 compute-0 ceph-osd[90269]: osd.2 17 set_numa_affinity not setting numa affinity
Oct 01 16:35:21 compute-0 ceph-osd[90269]: osd.2 17 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Oct 01 16:35:21 compute-0 podman[93523]: 2025-10-01 16:35:21.752934787 +0000 UTC m=+0.212324124 container attach ad92065580bb104ac3cf2eb6989fec19e91fa7f370e898724377bbe3ba34bcd2 (image=quay.io/ceph/ceph:v18, name=lucid_bouman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:21 compute-0 podman[93584]: 2025-10-01 16:35:21.860840327 +0000 UTC m=+0.042031523 container create 7d7940687af4aff1375e517250f318ec8d8ccfed68a270ea40ea9fac9a40b339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kilby, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 16:35:21 compute-0 systemd[1]: Started libpod-conmon-7d7940687af4aff1375e517250f318ec8d8ccfed68a270ea40ea9fac9a40b339.scope.
Oct 01 16:35:21 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:21 compute-0 podman[93584]: 2025-10-01 16:35:21.841159846 +0000 UTC m=+0.022351072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:21 compute-0 podman[93584]: 2025-10-01 16:35:21.98266687 +0000 UTC m=+0.163858096 container init 7d7940687af4aff1375e517250f318ec8d8ccfed68a270ea40ea9fac9a40b339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kilby, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 01 16:35:21 compute-0 podman[93584]: 2025-10-01 16:35:21.987690244 +0000 UTC m=+0.168881440 container start 7d7940687af4aff1375e517250f318ec8d8ccfed68a270ea40ea9fac9a40b339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kilby, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 16:35:21 compute-0 funny_kilby[93601]: 167 167
Oct 01 16:35:21 compute-0 systemd[1]: libpod-7d7940687af4aff1375e517250f318ec8d8ccfed68a270ea40ea9fac9a40b339.scope: Deactivated successfully.
Oct 01 16:35:22 compute-0 podman[93584]: 2025-10-01 16:35:22.005037959 +0000 UTC m=+0.186229175 container attach 7d7940687af4aff1375e517250f318ec8d8ccfed68a270ea40ea9fac9a40b339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:22 compute-0 podman[93584]: 2025-10-01 16:35:22.00541137 +0000 UTC m=+0.186602566 container died 7d7940687af4aff1375e517250f318ec8d8ccfed68a270ea40ea9fac9a40b339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kilby, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Oct 01 16:35:22 compute-0 ceph-mgr[74571]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2139758837; not ready for session (expect reconnect)
Oct 01 16:35:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 16:35:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:22 compute-0 ceph-mgr[74571]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 16:35:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-06d4be719d2ab63a7860a9d2e68ac3739fc0201e37a3724d056f4e2b79e7e14b-merged.mount: Deactivated successfully.
Oct 01 16:35:22 compute-0 podman[93584]: 2025-10-01 16:35:22.071681536 +0000 UTC m=+0.252872732 container remove 7d7940687af4aff1375e517250f318ec8d8ccfed68a270ea40ea9fac9a40b339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kilby, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:22 compute-0 systemd[1]: libpod-conmon-7d7940687af4aff1375e517250f318ec8d8ccfed68a270ea40ea9fac9a40b339.scope: Deactivated successfully.
Oct 01 16:35:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct 01 16:35:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Oct 01 16:35:22 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2139758837,v1:192.168.122.100:6811/2139758837] boot
Oct 01 16:35:22 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Oct 01 16:35:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 16:35:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:22 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=-1 lpr=18 pi=[13,18)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:35:22 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=-1 lpr=18 pi=[13,18)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:35:22 compute-0 ceph-osd[90269]: osd.2 18 state: booting -> active
Oct 01 16:35:22 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 pi=[13,18)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:35:22 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:35:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 01 16:35:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 01 16:35:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 01 16:35:22 compute-0 ceph-mon[74273]: Adjusting osd_memory_target on compute-0 to 43639k
Oct 01 16:35:22 compute-0 ceph-mon[74273]: Unable to set osd_memory_target on compute-0 to 44686677: error parsing value: Value '44686677' is below minimum 939524096
Oct 01 16:35:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:35:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:35:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:35:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:22 compute-0 ceph-mon[74273]: pgmap v43: 4 pgs: 1 creating+peering, 3 unknown; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 01 16:35:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:22 compute-0 ceph-mon[74273]: osd.2 [v2:192.168.122.100:6810/2139758837,v1:192.168.122.100:6811/2139758837] boot
Oct 01 16:35:22 compute-0 ceph-mon[74273]: osdmap e18: 3 total, 3 up, 3 in
Oct 01 16:35:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 16:35:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 01 16:35:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1401853213' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 16:35:22 compute-0 podman[93645]: 2025-10-01 16:35:22.297775919 +0000 UTC m=+0.103050589 container create e39399ec1419a6aa5954b30f9809c1c29ef6ce718cb3ca3917fd19bc06cd39d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 16:35:22 compute-0 podman[93645]: 2025-10-01 16:35:22.218850367 +0000 UTC m=+0.024125007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:22 compute-0 systemd[1]: Started libpod-conmon-e39399ec1419a6aa5954b30f9809c1c29ef6ce718cb3ca3917fd19bc06cd39d9.scope.
Oct 01 16:35:22 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0994aada61ca4f044b1e08dbeed69effe862edee5fb6b7c585bf2c3777ae384/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0994aada61ca4f044b1e08dbeed69effe862edee5fb6b7c585bf2c3777ae384/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0994aada61ca4f044b1e08dbeed69effe862edee5fb6b7c585bf2c3777ae384/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0994aada61ca4f044b1e08dbeed69effe862edee5fb6b7c585bf2c3777ae384/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0994aada61ca4f044b1e08dbeed69effe862edee5fb6b7c585bf2c3777ae384/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:22 compute-0 podman[93645]: 2025-10-01 16:35:22.423827579 +0000 UTC m=+0.229102249 container init e39399ec1419a6aa5954b30f9809c1c29ef6ce718cb3ca3917fd19bc06cd39d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:22 compute-0 podman[93645]: 2025-10-01 16:35:22.435528645 +0000 UTC m=+0.240803285 container start e39399ec1419a6aa5954b30f9809c1c29ef6ce718cb3ca3917fd19bc06cd39d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 01 16:35:22 compute-0 podman[93645]: 2025-10-01 16:35:22.439269064 +0000 UTC m=+0.244543764 container attach e39399ec1419a6aa5954b30f9809c1c29ef6ce718cb3ca3917fd19bc06cd39d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 01 16:35:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct 01 16:35:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1401853213' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 16:35:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Oct 01 16:35:23 compute-0 lucid_bouman[93564]: pool 'images' created
Oct 01 16:35:23 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Oct 01 16:35:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 19 pg[5.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:35:23 compute-0 systemd[1]: libpod-ad92065580bb104ac3cf2eb6989fec19e91fa7f370e898724377bbe3ba34bcd2.scope: Deactivated successfully.
Oct 01 16:35:23 compute-0 podman[93523]: 2025-10-01 16:35:23.135239802 +0000 UTC m=+1.594629119 container died ad92065580bb104ac3cf2eb6989fec19e91fa7f370e898724377bbe3ba34bcd2 (image=quay.io/ceph/ceph:v18, name=lucid_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:23 compute-0 ceph-mon[74273]: OSD bench result of 4527.873881 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 01 16:35:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1401853213' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 16:35:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1401853213' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 16:35:23 compute-0 ceph-mon[74273]: osdmap e19: 3 total, 3 up, 3 in
Oct 01 16:35:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=18/19 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 pi=[13,18)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:35:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8986b1209b957cf85f07bf0dfa848588b1a305e9963867023d7b851d10aa6da-merged.mount: Deactivated successfully.
Oct 01 16:35:23 compute-0 podman[93523]: 2025-10-01 16:35:23.247188208 +0000 UTC m=+1.706577535 container remove ad92065580bb104ac3cf2eb6989fec19e91fa7f370e898724377bbe3ba34bcd2 (image=quay.io/ceph/ceph:v18, name=lucid_bouman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:35:23 compute-0 systemd[1]: libpod-conmon-ad92065580bb104ac3cf2eb6989fec19e91fa7f370e898724377bbe3ba34bcd2.scope: Deactivated successfully.
Oct 01 16:35:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v46: 5 pgs: 1 creating+peering, 2 active+clean, 2 unknown; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:23 compute-0 sudo[93486]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:23 compute-0 sudo[93721]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgnnmqpxlxvnyynjbykcieycdldthion ; /usr/bin/python3'
Oct 01 16:35:23 compute-0 sudo[93721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:23 compute-0 silly_gould[93663]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:35:23 compute-0 silly_gould[93663]: --> relative data size: 1.0
Oct 01 16:35:23 compute-0 silly_gould[93663]: --> All data devices are unavailable
Oct 01 16:35:23 compute-0 systemd[1]: libpod-e39399ec1419a6aa5954b30f9809c1c29ef6ce718cb3ca3917fd19bc06cd39d9.scope: Deactivated successfully.
Oct 01 16:35:23 compute-0 systemd[1]: libpod-e39399ec1419a6aa5954b30f9809c1c29ef6ce718cb3ca3917fd19bc06cd39d9.scope: Consumed 1.040s CPU time.
Oct 01 16:35:23 compute-0 podman[93645]: 2025-10-01 16:35:23.539026092 +0000 UTC m=+1.344300722 container died e39399ec1419a6aa5954b30f9809c1c29ef6ce718cb3ca3917fd19bc06cd39d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gould, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:23 compute-0 python3[93724]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0994aada61ca4f044b1e08dbeed69effe862edee5fb6b7c585bf2c3777ae384-merged.mount: Deactivated successfully.
Oct 01 16:35:23 compute-0 ceph-mon[74273]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 01 16:35:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:35:23 compute-0 podman[93733]: 2025-10-01 16:35:23.675061419 +0000 UTC m=+0.112330808 container create 9eb41afbdb25613f68ae3a60f6522c77bb4b5ad0169f970c75b3bc4d69a47061 (image=quay.io/ceph/ceph:v18, name=objective_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:23 compute-0 podman[93733]: 2025-10-01 16:35:23.599200706 +0000 UTC m=+0.036470095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:23 compute-0 podman[93645]: 2025-10-01 16:35:23.701680513 +0000 UTC m=+1.506955153 container remove e39399ec1419a6aa5954b30f9809c1c29ef6ce718cb3ca3917fd19bc06cd39d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gould, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:23 compute-0 systemd[1]: Started libpod-conmon-9eb41afbdb25613f68ae3a60f6522c77bb4b5ad0169f970c75b3bc4d69a47061.scope.
Oct 01 16:35:23 compute-0 systemd[1]: libpod-conmon-e39399ec1419a6aa5954b30f9809c1c29ef6ce718cb3ca3917fd19bc06cd39d9.scope: Deactivated successfully.
Oct 01 16:35:23 compute-0 sudo[93498]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:23 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03745cbff3ba57498f727a9d51314a6777ccc8398ce6124ede00dcb90826a3e2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03745cbff3ba57498f727a9d51314a6777ccc8398ce6124ede00dcb90826a3e2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:23 compute-0 podman[93733]: 2025-10-01 16:35:23.76001023 +0000 UTC m=+0.197279629 container init 9eb41afbdb25613f68ae3a60f6522c77bb4b5ad0169f970c75b3bc4d69a47061 (image=quay.io/ceph/ceph:v18, name=objective_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 16:35:23 compute-0 podman[93733]: 2025-10-01 16:35:23.768683593 +0000 UTC m=+0.205952962 container start 9eb41afbdb25613f68ae3a60f6522c77bb4b5ad0169f970c75b3bc4d69a47061 (image=quay.io/ceph/ceph:v18, name=objective_brahmagupta, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:23 compute-0 podman[93733]: 2025-10-01 16:35:23.772955176 +0000 UTC m=+0.210224595 container attach 9eb41afbdb25613f68ae3a60f6522c77bb4b5ad0169f970c75b3bc4d69a47061 (image=quay.io/ceph/ceph:v18, name=objective_brahmagupta, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 01 16:35:23 compute-0 sudo[93763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:23 compute-0 sudo[93763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:23 compute-0 sudo[93763]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:23 compute-0 sudo[93789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:23 compute-0 sudo[93789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:23 compute-0 sudo[93789]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:23 compute-0 sudo[93814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:23 compute-0 sudo[93814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:23 compute-0 sudo[93814]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:23 compute-0 sudo[93839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:35:23 compute-0 sudo[93839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct 01 16:35:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Oct 01 16:35:24 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Oct 01 16:35:24 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:35:24 compute-0 ceph-mon[74273]: pgmap v46: 5 pgs: 1 creating+peering, 2 active+clean, 2 unknown; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:24 compute-0 ceph-mon[74273]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 01 16:35:24 compute-0 ceph-mon[74273]: osdmap e20: 3 total, 3 up, 3 in
Oct 01 16:35:24 compute-0 podman[93923]: 2025-10-01 16:35:24.270538802 +0000 UTC m=+0.036045800 container create 7f4d8440a6c644c826c4bb5ea3867ef051beaa13a9a23c55ed3aeccab6ac49d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_fermi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 16:35:24 compute-0 systemd[1]: Started libpod-conmon-7f4d8440a6c644c826c4bb5ea3867ef051beaa13a9a23c55ed3aeccab6ac49d0.scope.
Oct 01 16:35:24 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:24 compute-0 podman[93923]: 2025-10-01 16:35:24.328533743 +0000 UTC m=+0.094040751 container init 7f4d8440a6c644c826c4bb5ea3867ef051beaa13a9a23c55ed3aeccab6ac49d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_fermi, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:24 compute-0 podman[93923]: 2025-10-01 16:35:24.33401285 +0000 UTC m=+0.099519848 container start 7f4d8440a6c644c826c4bb5ea3867ef051beaa13a9a23c55ed3aeccab6ac49d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_fermi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:24 compute-0 sharp_fermi[93939]: 167 167
Oct 01 16:35:24 compute-0 systemd[1]: libpod-7f4d8440a6c644c826c4bb5ea3867ef051beaa13a9a23c55ed3aeccab6ac49d0.scope: Deactivated successfully.
Oct 01 16:35:24 compute-0 podman[93923]: 2025-10-01 16:35:24.338464284 +0000 UTC m=+0.103971282 container attach 7f4d8440a6c644c826c4bb5ea3867ef051beaa13a9a23c55ed3aeccab6ac49d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:24 compute-0 podman[93923]: 2025-10-01 16:35:24.338805779 +0000 UTC m=+0.104312777 container died 7f4d8440a6c644c826c4bb5ea3867ef051beaa13a9a23c55ed3aeccab6ac49d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct 01 16:35:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 01 16:35:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1939141714' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 16:35:24 compute-0 podman[93923]: 2025-10-01 16:35:24.253495115 +0000 UTC m=+0.019002143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-cae2acb7531c3f88d829ea2f9c026d2c7385f72e41f8f1b9420fd2437049051b-merged.mount: Deactivated successfully.
Oct 01 16:35:24 compute-0 podman[93923]: 2025-10-01 16:35:24.373471202 +0000 UTC m=+0.138978200 container remove 7f4d8440a6c644c826c4bb5ea3867ef051beaa13a9a23c55ed3aeccab6ac49d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_fermi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 16:35:24 compute-0 systemd[1]: libpod-conmon-7f4d8440a6c644c826c4bb5ea3867ef051beaa13a9a23c55ed3aeccab6ac49d0.scope: Deactivated successfully.
Oct 01 16:35:24 compute-0 podman[93965]: 2025-10-01 16:35:24.533566469 +0000 UTC m=+0.049352507 container create e1447e9d50345decd471853214b63d410e8fefa14da58b69c060c79937a5359b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_khorana, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 16:35:24 compute-0 systemd[1]: Started libpod-conmon-e1447e9d50345decd471853214b63d410e8fefa14da58b69c060c79937a5359b.scope.
Oct 01 16:35:24 compute-0 podman[93965]: 2025-10-01 16:35:24.510760206 +0000 UTC m=+0.026546343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:24 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43612b5ae7e2bda9931b4a40a1aa9f521b1f52333144fab4575e0c7e5e878a47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43612b5ae7e2bda9931b4a40a1aa9f521b1f52333144fab4575e0c7e5e878a47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43612b5ae7e2bda9931b4a40a1aa9f521b1f52333144fab4575e0c7e5e878a47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43612b5ae7e2bda9931b4a40a1aa9f521b1f52333144fab4575e0c7e5e878a47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:24 compute-0 podman[93965]: 2025-10-01 16:35:24.631627879 +0000 UTC m=+0.147413927 container init e1447e9d50345decd471853214b63d410e8fefa14da58b69c060c79937a5359b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_khorana, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:24 compute-0 podman[93965]: 2025-10-01 16:35:24.650563737 +0000 UTC m=+0.166349815 container start e1447e9d50345decd471853214b63d410e8fefa14da58b69c060c79937a5359b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_khorana, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:24 compute-0 podman[93965]: 2025-10-01 16:35:24.654408485 +0000 UTC m=+0.170194553 container attach e1447e9d50345decd471853214b63d410e8fefa14da58b69c060c79937a5359b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_khorana, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 16:35:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct 01 16:35:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1939141714' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 16:35:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Oct 01 16:35:25 compute-0 objective_brahmagupta[93760]: pool 'cephfs.cephfs.meta' created
Oct 01 16:35:25 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Oct 01 16:35:25 compute-0 systemd[1]: libpod-9eb41afbdb25613f68ae3a60f6522c77bb4b5ad0169f970c75b3bc4d69a47061.scope: Deactivated successfully.
Oct 01 16:35:25 compute-0 podman[93733]: 2025-10-01 16:35:25.15133691 +0000 UTC m=+1.588606299 container died 9eb41afbdb25613f68ae3a60f6522c77bb4b5ad0169f970c75b3bc4d69a47061 (image=quay.io/ceph/ceph:v18, name=objective_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 16:35:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1939141714' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 16:35:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1939141714' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 16:35:25 compute-0 ceph-mon[74273]: osdmap e21: 3 total, 3 up, 3 in
Oct 01 16:35:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-03745cbff3ba57498f727a9d51314a6777ccc8398ce6124ede00dcb90826a3e2-merged.mount: Deactivated successfully.
Oct 01 16:35:25 compute-0 podman[93733]: 2025-10-01 16:35:25.204984206 +0000 UTC m=+1.642253575 container remove 9eb41afbdb25613f68ae3a60f6522c77bb4b5ad0169f970c75b3bc4d69a47061 (image=quay.io/ceph/ceph:v18, name=objective_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:25 compute-0 sudo[93721]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:25 compute-0 systemd[1]: libpod-conmon-9eb41afbdb25613f68ae3a60f6522c77bb4b5ad0169f970c75b3bc4d69a47061.scope: Deactivated successfully.
Oct 01 16:35:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v49: 6 pgs: 1 creating+peering, 3 active+clean, 2 unknown; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:25 compute-0 sudo[94024]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upbiolheuxsurpqltdlhgkxqdtdkwtys ; /usr/bin/python3'
Oct 01 16:35:25 compute-0 sudo[94024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:25 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]: {
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:     "0": [
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:         {
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "devices": [
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "/dev/loop3"
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             ],
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "lv_name": "ceph_lv0",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "lv_size": "21470642176",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "name": "ceph_lv0",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "tags": {
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.cluster_name": "ceph",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.crush_device_class": "",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.encrypted": "0",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.osd_id": "0",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.type": "block",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.vdo": "0"
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             },
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "type": "block",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "vg_name": "ceph_vg0"
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:         }
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:     ],
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:     "1": [
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:         {
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "devices": [
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "/dev/loop4"
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             ],
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "lv_name": "ceph_lv1",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "lv_size": "21470642176",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "name": "ceph_lv1",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "tags": {
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.cluster_name": "ceph",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.crush_device_class": "",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.encrypted": "0",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.osd_id": "1",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.type": "block",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.vdo": "0"
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             },
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "type": "block",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "vg_name": "ceph_vg1"
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:         }
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:     ],
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:     "2": [
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:         {
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "devices": [
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "/dev/loop5"
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             ],
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "lv_name": "ceph_lv2",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "lv_size": "21470642176",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "name": "ceph_lv2",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "tags": {
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.cluster_name": "ceph",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.crush_device_class": "",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.encrypted": "0",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.osd_id": "2",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.type": "block",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:                 "ceph.vdo": "0"
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             },
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "type": "block",
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:             "vg_name": "ceph_vg2"
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:         }
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]:     ]
Oct 01 16:35:25 compute-0 pedantic_khorana[93981]: }
Oct 01 16:35:25 compute-0 systemd[1]: libpod-e1447e9d50345decd471853214b63d410e8fefa14da58b69c060c79937a5359b.scope: Deactivated successfully.
Oct 01 16:35:25 compute-0 podman[94029]: 2025-10-01 16:35:25.469346925 +0000 UTC m=+0.021976031 container died e1447e9d50345decd471853214b63d410e8fefa14da58b69c060c79937a5359b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 01 16:35:25 compute-0 python3[94026]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-43612b5ae7e2bda9931b4a40a1aa9f521b1f52333144fab4575e0c7e5e878a47-merged.mount: Deactivated successfully.
Oct 01 16:35:25 compute-0 podman[94029]: 2025-10-01 16:35:25.595154302 +0000 UTC m=+0.147783388 container remove e1447e9d50345decd471853214b63d410e8fefa14da58b69c060c79937a5359b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:25 compute-0 systemd[1]: libpod-conmon-e1447e9d50345decd471853214b63d410e8fefa14da58b69c060c79937a5359b.scope: Deactivated successfully.
Oct 01 16:35:25 compute-0 podman[94042]: 2025-10-01 16:35:25.621486196 +0000 UTC m=+0.100456570 container create bd8780c89f762f3e7016f754853bde9c654affa3034437915f47e6f8c4793386 (image=quay.io/ceph/ceph:v18, name=exciting_moser, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:35:25 compute-0 sudo[93839]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:25 compute-0 systemd[1]: Started libpod-conmon-bd8780c89f762f3e7016f754853bde9c654affa3034437915f47e6f8c4793386.scope.
Oct 01 16:35:25 compute-0 podman[94042]: 2025-10-01 16:35:25.571308267 +0000 UTC m=+0.050278721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69497f1ff433d7e402231917f5ff69249268106d3abfbce0a7598f0d104369a1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69497f1ff433d7e402231917f5ff69249268106d3abfbce0a7598f0d104369a1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:25 compute-0 sudo[94059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:25 compute-0 sudo[94059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:25 compute-0 podman[94042]: 2025-10-01 16:35:25.712977723 +0000 UTC m=+0.191948097 container init bd8780c89f762f3e7016f754853bde9c654affa3034437915f47e6f8c4793386 (image=quay.io/ceph/ceph:v18, name=exciting_moser, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 16:35:25 compute-0 sudo[94059]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:25 compute-0 podman[94042]: 2025-10-01 16:35:25.720749 +0000 UTC m=+0.199719364 container start bd8780c89f762f3e7016f754853bde9c654affa3034437915f47e6f8c4793386 (image=quay.io/ceph/ceph:v18, name=exciting_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 16:35:25 compute-0 podman[94042]: 2025-10-01 16:35:25.733906694 +0000 UTC m=+0.212877068 container attach bd8780c89f762f3e7016f754853bde9c654affa3034437915f47e6f8c4793386 (image=quay.io/ceph/ceph:v18, name=exciting_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:25 compute-0 sudo[94088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:25 compute-0 sudo[94088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:25 compute-0 sudo[94088]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:25 compute-0 sudo[94113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:25 compute-0 sudo[94113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:25 compute-0 sudo[94113]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:25 compute-0 sudo[94138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:35:25 compute-0 sudo[94138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:26 compute-0 podman[94221]: 2025-10-01 16:35:26.151664762 +0000 UTC m=+0.039869579 container create 473b24bc86fd312b48922ebae84064b453a4fb4c5c910c2b1e0be19176047c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:35:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct 01 16:35:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Oct 01 16:35:26 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Oct 01 16:35:26 compute-0 ceph-mon[74273]: pgmap v49: 6 pgs: 1 creating+peering, 3 active+clean, 2 unknown; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:26 compute-0 systemd[1]: Started libpod-conmon-473b24bc86fd312b48922ebae84064b453a4fb4c5c910c2b1e0be19176047c34.scope.
Oct 01 16:35:26 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:35:26 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:26 compute-0 podman[94221]: 2025-10-01 16:35:26.198082906 +0000 UTC m=+0.086287783 container init 473b24bc86fd312b48922ebae84064b453a4fb4c5c910c2b1e0be19176047c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:35:26 compute-0 podman[94221]: 2025-10-01 16:35:26.20272266 +0000 UTC m=+0.090927477 container start 473b24bc86fd312b48922ebae84064b453a4fb4c5c910c2b1e0be19176047c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mendeleev, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 16:35:26 compute-0 loving_mendeleev[94238]: 167 167
Oct 01 16:35:26 compute-0 systemd[1]: libpod-473b24bc86fd312b48922ebae84064b453a4fb4c5c910c2b1e0be19176047c34.scope: Deactivated successfully.
Oct 01 16:35:26 compute-0 podman[94221]: 2025-10-01 16:35:26.206764417 +0000 UTC m=+0.094969254 container attach 473b24bc86fd312b48922ebae84064b453a4fb4c5c910c2b1e0be19176047c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:26 compute-0 podman[94221]: 2025-10-01 16:35:26.207038048 +0000 UTC m=+0.095242865 container died 473b24bc86fd312b48922ebae84064b453a4fb4c5c910c2b1e0be19176047c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mendeleev, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 16:35:26 compute-0 podman[94221]: 2025-10-01 16:35:26.131917469 +0000 UTC m=+0.020122325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-1197bf14f79cf5478e359e31ceda442e894caac8dfd47656311e7f4c1bdd46dc-merged.mount: Deactivated successfully.
Oct 01 16:35:26 compute-0 podman[94221]: 2025-10-01 16:35:26.25557143 +0000 UTC m=+0.143776247 container remove 473b24bc86fd312b48922ebae84064b453a4fb4c5c910c2b1e0be19176047c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mendeleev, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:26 compute-0 systemd[1]: libpod-conmon-473b24bc86fd312b48922ebae84064b453a4fb4c5c910c2b1e0be19176047c34.scope: Deactivated successfully.
Oct 01 16:35:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 01 16:35:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/854242239' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 16:35:26 compute-0 podman[94265]: 2025-10-01 16:35:26.453492821 +0000 UTC m=+0.079417981 container create 493d29e3e53882740fb69169e8594af688186d7cc57c052f8fe262f8520a3461 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chatterjee, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:26 compute-0 podman[94265]: 2025-10-01 16:35:26.398221634 +0000 UTC m=+0.024146834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:26 compute-0 systemd[1]: Started libpod-conmon-493d29e3e53882740fb69169e8594af688186d7cc57c052f8fe262f8520a3461.scope.
Oct 01 16:35:26 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fe06bbd5e07bfb89fd8f071ee8e0ddb71d88337568bedbcfd908c08be212feb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fe06bbd5e07bfb89fd8f071ee8e0ddb71d88337568bedbcfd908c08be212feb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fe06bbd5e07bfb89fd8f071ee8e0ddb71d88337568bedbcfd908c08be212feb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fe06bbd5e07bfb89fd8f071ee8e0ddb71d88337568bedbcfd908c08be212feb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:26 compute-0 podman[94265]: 2025-10-01 16:35:26.560170579 +0000 UTC m=+0.186095729 container init 493d29e3e53882740fb69169e8594af688186d7cc57c052f8fe262f8520a3461 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 16:35:26 compute-0 podman[94265]: 2025-10-01 16:35:26.575518903 +0000 UTC m=+0.201444023 container start 493d29e3e53882740fb69169e8594af688186d7cc57c052f8fe262f8520a3461 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chatterjee, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:26 compute-0 podman[94265]: 2025-10-01 16:35:26.613713727 +0000 UTC m=+0.239638847 container attach 493d29e3e53882740fb69169e8594af688186d7cc57c052f8fe262f8520a3461 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chatterjee, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct 01 16:35:27 compute-0 ceph-mon[74273]: osdmap e22: 3 total, 3 up, 3 in
Oct 01 16:35:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/854242239' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 16:35:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/854242239' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 16:35:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Oct 01 16:35:27 compute-0 exciting_moser[94061]: pool 'cephfs.cephfs.data' created
Oct 01 16:35:27 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Oct 01 16:35:27 compute-0 podman[94042]: 2025-10-01 16:35:27.232206622 +0000 UTC m=+1.711177036 container died bd8780c89f762f3e7016f754853bde9c654affa3034437915f47e6f8c4793386 (image=quay.io/ceph/ceph:v18, name=exciting_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 16:35:27 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [1] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:35:27 compute-0 systemd[1]: libpod-bd8780c89f762f3e7016f754853bde9c654affa3034437915f47e6f8c4793386.scope: Deactivated successfully.
Oct 01 16:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-69497f1ff433d7e402231917f5ff69249268106d3abfbce0a7598f0d104369a1-merged.mount: Deactivated successfully.
Oct 01 16:35:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v52: 7 pgs: 1 creating+peering, 3 active+clean, 3 unknown; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:27 compute-0 podman[94042]: 2025-10-01 16:35:27.294934869 +0000 UTC m=+1.773905243 container remove bd8780c89f762f3e7016f754853bde9c654affa3034437915f47e6f8c4793386 (image=quay.io/ceph/ceph:v18, name=exciting_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:27 compute-0 systemd[1]: libpod-conmon-bd8780c89f762f3e7016f754853bde9c654affa3034437915f47e6f8c4793386.scope: Deactivated successfully.
Oct 01 16:35:27 compute-0 sudo[94024]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:27 compute-0 sudo[94340]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqvulihhbvzdmrgvgulmtvhyepflmuzj ; /usr/bin/python3'
Oct 01 16:35:27 compute-0 sudo[94340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]: {
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:         "osd_id": 2,
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:         "type": "bluestore"
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:     },
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:         "osd_id": 0,
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:         "type": "bluestore"
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:     },
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:         "osd_id": 1,
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:         "type": "bluestore"
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]:     }
Oct 01 16:35:27 compute-0 agitated_chatterjee[94281]: }
Oct 01 16:35:27 compute-0 systemd[1]: libpod-493d29e3e53882740fb69169e8594af688186d7cc57c052f8fe262f8520a3461.scope: Deactivated successfully.
Oct 01 16:35:27 compute-0 podman[94265]: 2025-10-01 16:35:27.59128945 +0000 UTC m=+1.217214580 container died 493d29e3e53882740fb69169e8594af688186d7cc57c052f8fe262f8520a3461 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 16:35:27 compute-0 systemd[1]: libpod-493d29e3e53882740fb69169e8594af688186d7cc57c052f8fe262f8520a3461.scope: Consumed 1.018s CPU time.
Oct 01 16:35:27 compute-0 python3[94345]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fe06bbd5e07bfb89fd8f071ee8e0ddb71d88337568bedbcfd908c08be212feb-merged.mount: Deactivated successfully.
Oct 01 16:35:27 compute-0 podman[94265]: 2025-10-01 16:35:27.646118993 +0000 UTC m=+1.272044113 container remove 493d29e3e53882740fb69169e8594af688186d7cc57c052f8fe262f8520a3461 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:27 compute-0 systemd[1]: libpod-conmon-493d29e3e53882740fb69169e8594af688186d7cc57c052f8fe262f8520a3461.scope: Deactivated successfully.
Oct 01 16:35:27 compute-0 sudo[94138]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:27 compute-0 podman[94355]: 2025-10-01 16:35:27.675715886 +0000 UTC m=+0.061881006 container create 41e1a9c1d3d1a9db5683f2cb9ec7a029e9c2e20239049ebd28a20cf524165754 (image=quay.io/ceph/ceph:v18, name=nervous_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 16:35:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:35:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:35:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:27 compute-0 systemd[1]: Started libpod-conmon-41e1a9c1d3d1a9db5683f2cb9ec7a029e9c2e20239049ebd28a20cf524165754.scope.
Oct 01 16:35:27 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:27 compute-0 sudo[94383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f181e1361adae15e2bd42aede5c1c6862a149cda879e73b5a3fcc75bd1f75f39/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f181e1361adae15e2bd42aede5c1c6862a149cda879e73b5a3fcc75bd1f75f39/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:27 compute-0 sudo[94383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:27 compute-0 podman[94355]: 2025-10-01 16:35:27.657777863 +0000 UTC m=+0.043943013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:27 compute-0 sudo[94383]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:27 compute-0 podman[94355]: 2025-10-01 16:35:27.755177872 +0000 UTC m=+0.141343012 container init 41e1a9c1d3d1a9db5683f2cb9ec7a029e9c2e20239049ebd28a20cf524165754 (image=quay.io/ceph/ceph:v18, name=nervous_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 16:35:27 compute-0 podman[94355]: 2025-10-01 16:35:27.762055482 +0000 UTC m=+0.148220602 container start 41e1a9c1d3d1a9db5683f2cb9ec7a029e9c2e20239049ebd28a20cf524165754 (image=quay.io/ceph/ceph:v18, name=nervous_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 16:35:27 compute-0 podman[94355]: 2025-10-01 16:35:27.765553626 +0000 UTC m=+0.151718786 container attach 41e1a9c1d3d1a9db5683f2cb9ec7a029e9c2e20239049ebd28a20cf524165754 (image=quay.io/ceph/ceph:v18, name=nervous_dhawan, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 16:35:27 compute-0 sudo[94412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:35:27 compute-0 sudo[94412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:27 compute-0 sudo[94412]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:27 compute-0 sudo[94437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:27 compute-0 sudo[94437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:27 compute-0 sudo[94437]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:27 compute-0 sudo[94462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:27 compute-0 sudo[94462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:27 compute-0 sudo[94462]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:27 compute-0 sudo[94487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:28 compute-0 sudo[94487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:28 compute-0 sudo[94487]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:28 compute-0 sudo[94512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 16:35:28 compute-0 sudo[94512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct 01 16:35:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/854242239' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 16:35:28 compute-0 ceph-mon[74273]: osdmap e23: 3 total, 3 up, 3 in
Oct 01 16:35:28 compute-0 ceph-mon[74273]: pgmap v52: 7 pgs: 1 creating+peering, 3 active+clean, 3 unknown; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Oct 01 16:35:28 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Oct 01 16:35:28 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [1] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:35:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Oct 01 16:35:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4167019060' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 01 16:35:28 compute-0 podman[94630]: 2025-10-01 16:35:28.470202935 +0000 UTC m=+0.057322873 container exec bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:28 compute-0 podman[94630]: 2025-10-01 16:35:28.59124568 +0000 UTC m=+0.178365628 container exec_died bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 16:35:28 compute-0 ceph-mon[74273]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 01 16:35:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:35:29 compute-0 sudo[94512]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:35:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:35:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:29 compute-0 sudo[94753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:29 compute-0 sudo[94753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:29 compute-0 sudo[94753]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:29 compute-0 sudo[94778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:29 compute-0 sudo[94778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:29 compute-0 sudo[94778]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:29 compute-0 sudo[94803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:29 compute-0 sudo[94803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:29 compute-0 sudo[94803]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct 01 16:35:29 compute-0 ceph-mon[74273]: osdmap e24: 3 total, 3 up, 3 in
Oct 01 16:35:29 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4167019060' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 01 16:35:29 compute-0 ceph-mon[74273]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 01 16:35:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4167019060' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 01 16:35:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Oct 01 16:35:29 compute-0 nervous_dhawan[94389]: enabled application 'rbd' on pool 'vms'
Oct 01 16:35:29 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Oct 01 16:35:29 compute-0 systemd[1]: libpod-41e1a9c1d3d1a9db5683f2cb9ec7a029e9c2e20239049ebd28a20cf524165754.scope: Deactivated successfully.
Oct 01 16:35:29 compute-0 conmon[94389]: conmon 41e1a9c1d3d1a9db5683 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-41e1a9c1d3d1a9db5683f2cb9ec7a029e9c2e20239049ebd28a20cf524165754.scope/container/memory.events
Oct 01 16:35:29 compute-0 podman[94355]: 2025-10-01 16:35:29.262122814 +0000 UTC m=+1.648287944 container died 41e1a9c1d3d1a9db5683f2cb9ec7a029e9c2e20239049ebd28a20cf524165754 (image=quay.io/ceph/ceph:v18, name=nervous_dhawan, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 16:35:29 compute-0 sudo[94828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:35:29 compute-0 sudo[94828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v55: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f181e1361adae15e2bd42aede5c1c6862a149cda879e73b5a3fcc75bd1f75f39-merged.mount: Deactivated successfully.
Oct 01 16:35:29 compute-0 podman[94355]: 2025-10-01 16:35:29.314716301 +0000 UTC m=+1.700881431 container remove 41e1a9c1d3d1a9db5683f2cb9ec7a029e9c2e20239049ebd28a20cf524165754 (image=quay.io/ceph/ceph:v18, name=nervous_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 16:35:29 compute-0 systemd[1]: libpod-conmon-41e1a9c1d3d1a9db5683f2cb9ec7a029e9c2e20239049ebd28a20cf524165754.scope: Deactivated successfully.
Oct 01 16:35:29 compute-0 sudo[94340]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:29 compute-0 sudo[94899]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpnsowgbijvfqlghbanqlngkcuwjggpi ; /usr/bin/python3'
Oct 01 16:35:29 compute-0 sudo[94899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:29 compute-0 python3[94903]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:29 compute-0 podman[94909]: 2025-10-01 16:35:29.745712554 +0000 UTC m=+0.035883456 container create 563606c383f96ca80fe565b5728664b1c384664d66dbaaf1f90bb2268e03748c (image=quay.io/ceph/ceph:v18, name=trusting_goldwasser, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 16:35:29 compute-0 sudo[94828]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:29 compute-0 systemd[1]: Started libpod-conmon-563606c383f96ca80fe565b5728664b1c384664d66dbaaf1f90bb2268e03748c.scope.
Oct 01 16:35:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:35:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:35:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:35:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:35:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:29 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 924997db-7dd3-400d-a8e8-9095c07a4d03 does not exist
Oct 01 16:35:29 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 310fec36-f43b-4137-b8e4-399e8347adae does not exist
Oct 01 16:35:29 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev ce6299cb-221c-4a20-b6e6-2c7de607af6f does not exist
Oct 01 16:35:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:35:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:35:29 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:35:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:35:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5075158c18dd115cc5af80a6f6a6b3fdcc3886aab202728ab2d87ffbc74fe61/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5075158c18dd115cc5af80a6f6a6b3fdcc3886aab202728ab2d87ffbc74fe61/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:35:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:29 compute-0 podman[94909]: 2025-10-01 16:35:29.729428058 +0000 UTC m=+0.019598980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:29 compute-0 podman[94909]: 2025-10-01 16:35:29.834968925 +0000 UTC m=+0.125139847 container init 563606c383f96ca80fe565b5728664b1c384664d66dbaaf1f90bb2268e03748c (image=quay.io/ceph/ceph:v18, name=trusting_goldwasser, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:29 compute-0 podman[94909]: 2025-10-01 16:35:29.842116467 +0000 UTC m=+0.132287369 container start 563606c383f96ca80fe565b5728664b1c384664d66dbaaf1f90bb2268e03748c (image=quay.io/ceph/ceph:v18, name=trusting_goldwasser, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 16:35:29 compute-0 podman[94909]: 2025-10-01 16:35:29.845017234 +0000 UTC m=+0.135188156 container attach 563606c383f96ca80fe565b5728664b1c384664d66dbaaf1f90bb2268e03748c (image=quay.io/ceph/ceph:v18, name=trusting_goldwasser, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:29 compute-0 sudo[94937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:29 compute-0 sudo[94937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:29 compute-0 sudo[94937]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:29 compute-0 sudo[94965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:29 compute-0 sudo[94965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:29 compute-0 sudo[94965]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:29 compute-0 sudo[94990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:29 compute-0 sudo[94990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:29 compute-0 sudo[94990]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:30 compute-0 sudo[95015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:35:30 compute-0 sudo[95015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:30 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4167019060' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 01 16:35:30 compute-0 ceph-mon[74273]: osdmap e25: 3 total, 3 up, 3 in
Oct 01 16:35:30 compute-0 ceph-mon[74273]: pgmap v55: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:35:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:35:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:35:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:30 compute-0 podman[95098]: 2025-10-01 16:35:30.356921801 +0000 UTC m=+0.048088469 container create cb9ec899b49cfe665d97970a45b5077d93285eb0fe0fd9a8d86e75916bac3686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 16:35:30 compute-0 systemd[1]: Started libpod-conmon-cb9ec899b49cfe665d97970a45b5077d93285eb0fe0fd9a8d86e75916bac3686.scope.
Oct 01 16:35:30 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:30 compute-0 podman[95098]: 2025-10-01 16:35:30.425674167 +0000 UTC m=+0.116840775 container init cb9ec899b49cfe665d97970a45b5077d93285eb0fe0fd9a8d86e75916bac3686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:30 compute-0 podman[95098]: 2025-10-01 16:35:30.336147845 +0000 UTC m=+0.027314473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:30 compute-0 podman[95098]: 2025-10-01 16:35:30.433157175 +0000 UTC m=+0.124323763 container start cb9ec899b49cfe665d97970a45b5077d93285eb0fe0fd9a8d86e75916bac3686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 01 16:35:30 compute-0 admiring_hodgkin[95115]: 167 167
Oct 01 16:35:30 compute-0 podman[95098]: 2025-10-01 16:35:30.439540667 +0000 UTC m=+0.130707275 container attach cb9ec899b49cfe665d97970a45b5077d93285eb0fe0fd9a8d86e75916bac3686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 16:35:30 compute-0 systemd[1]: libpod-cb9ec899b49cfe665d97970a45b5077d93285eb0fe0fd9a8d86e75916bac3686.scope: Deactivated successfully.
Oct 01 16:35:30 compute-0 podman[95098]: 2025-10-01 16:35:30.439785231 +0000 UTC m=+0.130951839 container died cb9ec899b49cfe665d97970a45b5077d93285eb0fe0fd9a8d86e75916bac3686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Oct 01 16:35:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2754516438' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 01 16:35:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-121e62ce25b2d721310572f3144fce961eb50437d0f3fec0d7afeb5ce3223b7a-merged.mount: Deactivated successfully.
Oct 01 16:35:30 compute-0 podman[95098]: 2025-10-01 16:35:30.479200507 +0000 UTC m=+0.170367095 container remove cb9ec899b49cfe665d97970a45b5077d93285eb0fe0fd9a8d86e75916bac3686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 16:35:30 compute-0 systemd[1]: libpod-conmon-cb9ec899b49cfe665d97970a45b5077d93285eb0fe0fd9a8d86e75916bac3686.scope: Deactivated successfully.
Oct 01 16:35:30 compute-0 podman[95140]: 2025-10-01 16:35:30.675611056 +0000 UTC m=+0.058112221 container create 349c465836ac15dd8ccb51d5a2557762babff6210df95e5146d0bad671bbb153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_torvalds, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:35:30 compute-0 systemd[1]: Started libpod-conmon-349c465836ac15dd8ccb51d5a2557762babff6210df95e5146d0bad671bbb153.scope.
Oct 01 16:35:30 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:30 compute-0 podman[95140]: 2025-10-01 16:35:30.647262802 +0000 UTC m=+0.029764067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0b2cb8f9348a58f1195242ab9665325ae118deb544627cd35eb50b3d4558f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0b2cb8f9348a58f1195242ab9665325ae118deb544627cd35eb50b3d4558f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0b2cb8f9348a58f1195242ab9665325ae118deb544627cd35eb50b3d4558f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0b2cb8f9348a58f1195242ab9665325ae118deb544627cd35eb50b3d4558f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0b2cb8f9348a58f1195242ab9665325ae118deb544627cd35eb50b3d4558f1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:30 compute-0 podman[95140]: 2025-10-01 16:35:30.756372005 +0000 UTC m=+0.138873220 container init 349c465836ac15dd8ccb51d5a2557762babff6210df95e5146d0bad671bbb153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 16:35:30 compute-0 podman[95140]: 2025-10-01 16:35:30.763800608 +0000 UTC m=+0.146301803 container start 349c465836ac15dd8ccb51d5a2557762babff6210df95e5146d0bad671bbb153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 16:35:30 compute-0 podman[95140]: 2025-10-01 16:35:30.766371329 +0000 UTC m=+0.148872504 container attach 349c465836ac15dd8ccb51d5a2557762babff6210df95e5146d0bad671bbb153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 16:35:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct 01 16:35:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2754516438' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 01 16:35:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2754516438' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 01 16:35:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Oct 01 16:35:31 compute-0 trusting_goldwasser[94934]: enabled application 'rbd' on pool 'volumes'
Oct 01 16:35:31 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Oct 01 16:35:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:31 compute-0 systemd[1]: libpod-563606c383f96ca80fe565b5728664b1c384664d66dbaaf1f90bb2268e03748c.scope: Deactivated successfully.
Oct 01 16:35:31 compute-0 podman[94909]: 2025-10-01 16:35:31.290164413 +0000 UTC m=+1.580335335 container died 563606c383f96ca80fe565b5728664b1c384664d66dbaaf1f90bb2268e03748c (image=quay.io/ceph/ceph:v18, name=trusting_goldwasser, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 16:35:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5075158c18dd115cc5af80a6f6a6b3fdcc3886aab202728ab2d87ffbc74fe61-merged.mount: Deactivated successfully.
Oct 01 16:35:31 compute-0 podman[94909]: 2025-10-01 16:35:31.340508035 +0000 UTC m=+1.630678937 container remove 563606c383f96ca80fe565b5728664b1c384664d66dbaaf1f90bb2268e03748c (image=quay.io/ceph/ceph:v18, name=trusting_goldwasser, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 16:35:31 compute-0 systemd[1]: libpod-conmon-563606c383f96ca80fe565b5728664b1c384664d66dbaaf1f90bb2268e03748c.scope: Deactivated successfully.
Oct 01 16:35:31 compute-0 sudo[94899]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:31 compute-0 sudo[95201]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkuvpipqaxqeywgzsbhabbwkspqmwulf ; /usr/bin/python3'
Oct 01 16:35:31 compute-0 sudo[95201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:31 compute-0 python3[95205]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:31 compute-0 podman[95213]: 2025-10-01 16:35:31.676494989 +0000 UTC m=+0.052217237 container create 1a27a6e1be671cb41d85144a15919398bd2cf942805a8a6ee4f0b83d8e5829c0 (image=quay.io/ceph/ceph:v18, name=boring_cannon, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:35:31 compute-0 systemd[1]: Started libpod-conmon-1a27a6e1be671cb41d85144a15919398bd2cf942805a8a6ee4f0b83d8e5829c0.scope.
Oct 01 16:35:31 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240fdf07492a07f244ccbf759f7ed92ea0f09a8063826551ef0c668dbfc1435e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240fdf07492a07f244ccbf759f7ed92ea0f09a8063826551ef0c668dbfc1435e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:31 compute-0 podman[95213]: 2025-10-01 16:35:31.738187394 +0000 UTC m=+0.113909662 container init 1a27a6e1be671cb41d85144a15919398bd2cf942805a8a6ee4f0b83d8e5829c0 (image=quay.io/ceph/ceph:v18, name=boring_cannon, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 16:35:31 compute-0 podman[95213]: 2025-10-01 16:35:31.744206104 +0000 UTC m=+0.119928352 container start 1a27a6e1be671cb41d85144a15919398bd2cf942805a8a6ee4f0b83d8e5829c0 (image=quay.io/ceph/ceph:v18, name=boring_cannon, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 16:35:31 compute-0 podman[95213]: 2025-10-01 16:35:31.747477782 +0000 UTC m=+0.123200040 container attach 1a27a6e1be671cb41d85144a15919398bd2cf942805a8a6ee4f0b83d8e5829c0 (image=quay.io/ceph/ceph:v18, name=boring_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 16:35:31 compute-0 focused_torvalds[95157]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:35:31 compute-0 focused_torvalds[95157]: --> relative data size: 1.0
Oct 01 16:35:31 compute-0 focused_torvalds[95157]: --> All data devices are unavailable
Oct 01 16:35:31 compute-0 podman[95213]: 2025-10-01 16:35:31.656332529 +0000 UTC m=+0.032054817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:31 compute-0 systemd[1]: libpod-349c465836ac15dd8ccb51d5a2557762babff6210df95e5146d0bad671bbb153.scope: Deactivated successfully.
Oct 01 16:35:31 compute-0 podman[95140]: 2025-10-01 16:35:31.781822488 +0000 UTC m=+1.164323663 container died 349c465836ac15dd8ccb51d5a2557762babff6210df95e5146d0bad671bbb153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Oct 01 16:35:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e0b2cb8f9348a58f1195242ab9665325ae118deb544627cd35eb50b3d4558f1-merged.mount: Deactivated successfully.
Oct 01 16:35:31 compute-0 podman[95140]: 2025-10-01 16:35:31.854446389 +0000 UTC m=+1.236947604 container remove 349c465836ac15dd8ccb51d5a2557762babff6210df95e5146d0bad671bbb153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:31 compute-0 systemd[1]: libpod-conmon-349c465836ac15dd8ccb51d5a2557762babff6210df95e5146d0bad671bbb153.scope: Deactivated successfully.
Oct 01 16:35:31 compute-0 sudo[95015]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:31 compute-0 sudo[95253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:31 compute-0 sudo[95253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:31 compute-0 sudo[95253]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:31 compute-0 sudo[95278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:31 compute-0 sudo[95278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:32 compute-0 sudo[95278]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:32 compute-0 sudo[95303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:32 compute-0 sudo[95303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:32 compute-0 sudo[95303]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:32 compute-0 sudo[95329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:35:32 compute-0 sudo[95329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:32 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2754516438' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 01 16:35:32 compute-0 ceph-mon[74273]: osdmap e26: 3 total, 3 up, 3 in
Oct 01 16:35:32 compute-0 ceph-mon[74273]: pgmap v57: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Oct 01 16:35:32 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2805070380' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 01 16:35:32 compute-0 podman[95413]: 2025-10-01 16:35:32.431261475 +0000 UTC m=+0.038465296 container create 468b9833f949eabeab440754319cc8b77a1cf479ea1914b3b927ea0b9201041d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 16:35:32 compute-0 systemd[1]: Started libpod-conmon-468b9833f949eabeab440754319cc8b77a1cf479ea1914b3b927ea0b9201041d.scope.
Oct 01 16:35:32 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:32 compute-0 podman[95413]: 2025-10-01 16:35:32.497533001 +0000 UTC m=+0.104736832 container init 468b9833f949eabeab440754319cc8b77a1cf479ea1914b3b927ea0b9201041d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 16:35:32 compute-0 podman[95413]: 2025-10-01 16:35:32.50346435 +0000 UTC m=+0.110668181 container start 468b9833f949eabeab440754319cc8b77a1cf479ea1914b3b927ea0b9201041d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 16:35:32 compute-0 podman[95413]: 2025-10-01 16:35:32.50642559 +0000 UTC m=+0.113629431 container attach 468b9833f949eabeab440754319cc8b77a1cf479ea1914b3b927ea0b9201041d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:32 compute-0 nice_fermat[95429]: 167 167
Oct 01 16:35:32 compute-0 systemd[1]: libpod-468b9833f949eabeab440754319cc8b77a1cf479ea1914b3b927ea0b9201041d.scope: Deactivated successfully.
Oct 01 16:35:32 compute-0 podman[95413]: 2025-10-01 16:35:32.507986007 +0000 UTC m=+0.115189858 container died 468b9833f949eabeab440754319cc8b77a1cf479ea1914b3b927ea0b9201041d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct 01 16:35:32 compute-0 podman[95413]: 2025-10-01 16:35:32.414658372 +0000 UTC m=+0.021862203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-71911e1beafa8b761b7816e1837e1dcf2bb8a12d81729582a95fcf73dc4ca853-merged.mount: Deactivated successfully.
Oct 01 16:35:32 compute-0 podman[95413]: 2025-10-01 16:35:32.538602754 +0000 UTC m=+0.145806585 container remove 468b9833f949eabeab440754319cc8b77a1cf479ea1914b3b927ea0b9201041d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 16:35:32 compute-0 systemd[1]: libpod-conmon-468b9833f949eabeab440754319cc8b77a1cf479ea1914b3b927ea0b9201041d.scope: Deactivated successfully.
Oct 01 16:35:32 compute-0 podman[95452]: 2025-10-01 16:35:32.677833575 +0000 UTC m=+0.037968298 container create 5baab408eec06b8bc13fd96fd61d3a5576512ee9ebc351988d7ff8527740ccee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 16:35:32 compute-0 systemd[1]: Started libpod-conmon-5baab408eec06b8bc13fd96fd61d3a5576512ee9ebc351988d7ff8527740ccee.scope.
Oct 01 16:35:32 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2d6dbf393f0cd9530c1ac7980ad043523d6e07603515664194441ecc056ee2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2d6dbf393f0cd9530c1ac7980ad043523d6e07603515664194441ecc056ee2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2d6dbf393f0cd9530c1ac7980ad043523d6e07603515664194441ecc056ee2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2d6dbf393f0cd9530c1ac7980ad043523d6e07603515664194441ecc056ee2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:32 compute-0 podman[95452]: 2025-10-01 16:35:32.746018511 +0000 UTC m=+0.106153264 container init 5baab408eec06b8bc13fd96fd61d3a5576512ee9ebc351988d7ff8527740ccee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brown, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 16:35:32 compute-0 podman[95452]: 2025-10-01 16:35:32.751896896 +0000 UTC m=+0.112031629 container start 5baab408eec06b8bc13fd96fd61d3a5576512ee9ebc351988d7ff8527740ccee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brown, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 16:35:32 compute-0 podman[95452]: 2025-10-01 16:35:32.659385046 +0000 UTC m=+0.019519819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:32 compute-0 podman[95452]: 2025-10-01 16:35:32.758486906 +0000 UTC m=+0.118621639 container attach 5baab408eec06b8bc13fd96fd61d3a5576512ee9ebc351988d7ff8527740ccee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brown, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 16:35:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v58: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct 01 16:35:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2805070380' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 01 16:35:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2805070380' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 01 16:35:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Oct 01 16:35:33 compute-0 boring_cannon[95236]: enabled application 'rbd' on pool 'backups'
Oct 01 16:35:33 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Oct 01 16:35:33 compute-0 systemd[1]: libpod-1a27a6e1be671cb41d85144a15919398bd2cf942805a8a6ee4f0b83d8e5829c0.scope: Deactivated successfully.
Oct 01 16:35:33 compute-0 podman[95213]: 2025-10-01 16:35:33.320452906 +0000 UTC m=+1.696175164 container died 1a27a6e1be671cb41d85144a15919398bd2cf942805a8a6ee4f0b83d8e5829c0 (image=quay.io/ceph/ceph:v18, name=boring_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 16:35:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-240fdf07492a07f244ccbf759f7ed92ea0f09a8063826551ef0c668dbfc1435e-merged.mount: Deactivated successfully.
Oct 01 16:35:33 compute-0 podman[95213]: 2025-10-01 16:35:33.359879211 +0000 UTC m=+1.735601459 container remove 1a27a6e1be671cb41d85144a15919398bd2cf942805a8a6ee4f0b83d8e5829c0 (image=quay.io/ceph/ceph:v18, name=boring_cannon, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:33 compute-0 systemd[1]: libpod-conmon-1a27a6e1be671cb41d85144a15919398bd2cf942805a8a6ee4f0b83d8e5829c0.scope: Deactivated successfully.
Oct 01 16:35:33 compute-0 sudo[95201]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:33 compute-0 eager_brown[95469]: {
Oct 01 16:35:33 compute-0 eager_brown[95469]:     "0": [
Oct 01 16:35:33 compute-0 eager_brown[95469]:         {
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "devices": [
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "/dev/loop3"
Oct 01 16:35:33 compute-0 eager_brown[95469]:             ],
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "lv_name": "ceph_lv0",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "lv_size": "21470642176",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "name": "ceph_lv0",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "tags": {
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.cluster_name": "ceph",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.crush_device_class": "",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.encrypted": "0",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.osd_id": "0",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.type": "block",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.vdo": "0"
Oct 01 16:35:33 compute-0 eager_brown[95469]:             },
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "type": "block",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "vg_name": "ceph_vg0"
Oct 01 16:35:33 compute-0 eager_brown[95469]:         }
Oct 01 16:35:33 compute-0 eager_brown[95469]:     ],
Oct 01 16:35:33 compute-0 eager_brown[95469]:     "1": [
Oct 01 16:35:33 compute-0 eager_brown[95469]:         {
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "devices": [
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "/dev/loop4"
Oct 01 16:35:33 compute-0 eager_brown[95469]:             ],
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "lv_name": "ceph_lv1",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "lv_size": "21470642176",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "name": "ceph_lv1",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "tags": {
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.cluster_name": "ceph",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.crush_device_class": "",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.encrypted": "0",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.osd_id": "1",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.type": "block",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.vdo": "0"
Oct 01 16:35:33 compute-0 eager_brown[95469]:             },
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "type": "block",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "vg_name": "ceph_vg1"
Oct 01 16:35:33 compute-0 eager_brown[95469]:         }
Oct 01 16:35:33 compute-0 sudo[95516]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjlolymirvxmapcsglyvsqirxletiayx ; /usr/bin/python3'
Oct 01 16:35:33 compute-0 eager_brown[95469]:     ],
Oct 01 16:35:33 compute-0 eager_brown[95469]:     "2": [
Oct 01 16:35:33 compute-0 eager_brown[95469]:         {
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "devices": [
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "/dev/loop5"
Oct 01 16:35:33 compute-0 eager_brown[95469]:             ],
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "lv_name": "ceph_lv2",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "lv_size": "21470642176",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "name": "ceph_lv2",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "tags": {
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.cluster_name": "ceph",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.crush_device_class": "",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.encrypted": "0",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.osd_id": "2",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.type": "block",
Oct 01 16:35:33 compute-0 eager_brown[95469]:                 "ceph.vdo": "0"
Oct 01 16:35:33 compute-0 eager_brown[95469]:             },
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "type": "block",
Oct 01 16:35:33 compute-0 eager_brown[95469]:             "vg_name": "ceph_vg2"
Oct 01 16:35:33 compute-0 eager_brown[95469]:         }
Oct 01 16:35:33 compute-0 eager_brown[95469]:     ]
Oct 01 16:35:33 compute-0 eager_brown[95469]: }
Oct 01 16:35:33 compute-0 sudo[95516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:33 compute-0 systemd[1]: libpod-5baab408eec06b8bc13fd96fd61d3a5576512ee9ebc351988d7ff8527740ccee.scope: Deactivated successfully.
Oct 01 16:35:33 compute-0 podman[95452]: 2025-10-01 16:35:33.513476569 +0000 UTC m=+0.873611302 container died 5baab408eec06b8bc13fd96fd61d3a5576512ee9ebc351988d7ff8527740ccee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brown, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 16:35:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-df2d6dbf393f0cd9530c1ac7980ad043523d6e07603515664194441ecc056ee2-merged.mount: Deactivated successfully.
Oct 01 16:35:33 compute-0 podman[95452]: 2025-10-01 16:35:33.597308768 +0000 UTC m=+0.957443501 container remove 5baab408eec06b8bc13fd96fd61d3a5576512ee9ebc351988d7ff8527740ccee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 16:35:33 compute-0 systemd[1]: libpod-conmon-5baab408eec06b8bc13fd96fd61d3a5576512ee9ebc351988d7ff8527740ccee.scope: Deactivated successfully.
Oct 01 16:35:33 compute-0 sudo[95329]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:33 compute-0 python3[95518]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:33 compute-0 ceph-mon[74273]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 01 16:35:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:35:33 compute-0 sudo[95530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:33 compute-0 sudo[95530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:33 compute-0 sudo[95530]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:33 compute-0 podman[95547]: 2025-10-01 16:35:33.710526291 +0000 UTC m=+0.036246568 container create 473ded80c84e1f09649f35e2dd279dbb18327817f544bd87aaffe28e141cbd82 (image=quay.io/ceph/ceph:v18, name=exciting_archimedes, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:33 compute-0 systemd[1]: Started libpod-conmon-473ded80c84e1f09649f35e2dd279dbb18327817f544bd87aaffe28e141cbd82.scope.
Oct 01 16:35:33 compute-0 sudo[95568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:33 compute-0 sudo[95568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:33 compute-0 sudo[95568]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:33 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46c9862de29eafd5dcbf9b84086322c5a46b4b92dad40d797fc68bb3712c467a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46c9862de29eafd5dcbf9b84086322c5a46b4b92dad40d797fc68bb3712c467a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:33 compute-0 podman[95547]: 2025-10-01 16:35:33.784165846 +0000 UTC m=+0.109886123 container init 473ded80c84e1f09649f35e2dd279dbb18327817f544bd87aaffe28e141cbd82 (image=quay.io/ceph/ceph:v18, name=exciting_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:33 compute-0 podman[95547]: 2025-10-01 16:35:33.695371927 +0000 UTC m=+0.021092224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:33 compute-0 podman[95547]: 2025-10-01 16:35:33.793987528 +0000 UTC m=+0.119707805 container start 473ded80c84e1f09649f35e2dd279dbb18327817f544bd87aaffe28e141cbd82 (image=quay.io/ceph/ceph:v18, name=exciting_archimedes, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 16:35:33 compute-0 podman[95547]: 2025-10-01 16:35:33.797143228 +0000 UTC m=+0.122863505 container attach 473ded80c84e1f09649f35e2dd279dbb18327817f544bd87aaffe28e141cbd82 (image=quay.io/ceph/ceph:v18, name=exciting_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:33 compute-0 sudo[95599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:33 compute-0 sudo[95599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:33 compute-0 sudo[95599]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:33 compute-0 sudo[95625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:35:33 compute-0 sudo[95625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:34 compute-0 podman[95707]: 2025-10-01 16:35:34.176043273 +0000 UTC m=+0.032311151 container create 830c38983b7ca27670c1677dc33840f307da450045b65cdda356fb5a56c153cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wilbur, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:34 compute-0 systemd[1]: Started libpod-conmon-830c38983b7ca27670c1677dc33840f307da450045b65cdda356fb5a56c153cd.scope.
Oct 01 16:35:34 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:34 compute-0 podman[95707]: 2025-10-01 16:35:34.227029238 +0000 UTC m=+0.083297136 container init 830c38983b7ca27670c1677dc33840f307da450045b65cdda356fb5a56c153cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Oct 01 16:35:34 compute-0 podman[95707]: 2025-10-01 16:35:34.234164851 +0000 UTC m=+0.090432719 container start 830c38983b7ca27670c1677dc33840f307da450045b65cdda356fb5a56c153cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:34 compute-0 podman[95707]: 2025-10-01 16:35:34.236688987 +0000 UTC m=+0.092956885 container attach 830c38983b7ca27670c1677dc33840f307da450045b65cdda356fb5a56c153cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 01 16:35:34 compute-0 practical_wilbur[95723]: 167 167
Oct 01 16:35:34 compute-0 systemd[1]: libpod-830c38983b7ca27670c1677dc33840f307da450045b65cdda356fb5a56c153cd.scope: Deactivated successfully.
Oct 01 16:35:34 compute-0 podman[95707]: 2025-10-01 16:35:34.237958354 +0000 UTC m=+0.094226242 container died 830c38983b7ca27670c1677dc33840f307da450045b65cdda356fb5a56c153cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 16:35:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffdd2ebc794e28feda6237b8ad2f29a91fb13957194bc44fe74bb5afd67f61cc-merged.mount: Deactivated successfully.
Oct 01 16:35:34 compute-0 podman[95707]: 2025-10-01 16:35:34.162292201 +0000 UTC m=+0.018560099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:34 compute-0 podman[95707]: 2025-10-01 16:35:34.270598519 +0000 UTC m=+0.126866397 container remove 830c38983b7ca27670c1677dc33840f307da450045b65cdda356fb5a56c153cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wilbur, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:34 compute-0 systemd[1]: libpod-conmon-830c38983b7ca27670c1677dc33840f307da450045b65cdda356fb5a56c153cd.scope: Deactivated successfully.
Oct 01 16:35:34 compute-0 ceph-mon[74273]: pgmap v58: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2805070380' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 01 16:35:34 compute-0 ceph-mon[74273]: osdmap e27: 3 total, 3 up, 3 in
Oct 01 16:35:34 compute-0 ceph-mon[74273]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 01 16:35:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Oct 01 16:35:34 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1343776941' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 01 16:35:34 compute-0 podman[95748]: 2025-10-01 16:35:34.413594597 +0000 UTC m=+0.039848892 container create e6f91e35fa58a7aecd04d7b3408ff320c5489e211a8f9b2f42007ef7ed0320d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:34 compute-0 systemd[1]: Started libpod-conmon-e6f91e35fa58a7aecd04d7b3408ff320c5489e211a8f9b2f42007ef7ed0320d6.scope.
Oct 01 16:35:34 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd343e966be919a164b873cc14a66f51511d1fe16d0c5647cf766a21482dc692/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd343e966be919a164b873cc14a66f51511d1fe16d0c5647cf766a21482dc692/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd343e966be919a164b873cc14a66f51511d1fe16d0c5647cf766a21482dc692/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd343e966be919a164b873cc14a66f51511d1fe16d0c5647cf766a21482dc692/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:34 compute-0 podman[95748]: 2025-10-01 16:35:34.398724413 +0000 UTC m=+0.024978708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:34 compute-0 podman[95748]: 2025-10-01 16:35:34.512648692 +0000 UTC m=+0.138903017 container init e6f91e35fa58a7aecd04d7b3408ff320c5489e211a8f9b2f42007ef7ed0320d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct 01 16:35:34 compute-0 podman[95748]: 2025-10-01 16:35:34.520743075 +0000 UTC m=+0.146997380 container start e6f91e35fa58a7aecd04d7b3408ff320c5489e211a8f9b2f42007ef7ed0320d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 16:35:34 compute-0 podman[95748]: 2025-10-01 16:35:34.555847902 +0000 UTC m=+0.182102237 container attach e6f91e35fa58a7aecd04d7b3408ff320c5489e211a8f9b2f42007ef7ed0320d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 16:35:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct 01 16:35:35 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1343776941' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 01 16:35:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1343776941' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 01 16:35:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Oct 01 16:35:35 compute-0 exciting_archimedes[95595]: enabled application 'rbd' on pool 'images'
Oct 01 16:35:35 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Oct 01 16:35:35 compute-0 systemd[1]: libpod-473ded80c84e1f09649f35e2dd279dbb18327817f544bd87aaffe28e141cbd82.scope: Deactivated successfully.
Oct 01 16:35:35 compute-0 podman[95547]: 2025-10-01 16:35:35.343936171 +0000 UTC m=+1.669656448 container died 473ded80c84e1f09649f35e2dd279dbb18327817f544bd87aaffe28e141cbd82 (image=quay.io/ceph/ceph:v18, name=exciting_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:35:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-46c9862de29eafd5dcbf9b84086322c5a46b4b92dad40d797fc68bb3712c467a-merged.mount: Deactivated successfully.
Oct 01 16:35:35 compute-0 podman[95547]: 2025-10-01 16:35:35.385099804 +0000 UTC m=+1.710820081 container remove 473ded80c84e1f09649f35e2dd279dbb18327817f544bd87aaffe28e141cbd82 (image=quay.io/ceph/ceph:v18, name=exciting_archimedes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 16:35:35 compute-0 systemd[1]: libpod-conmon-473ded80c84e1f09649f35e2dd279dbb18327817f544bd87aaffe28e141cbd82.scope: Deactivated successfully.
Oct 01 16:35:35 compute-0 sudo[95516]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:35 compute-0 sleepy_wu[95764]: {
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:         "osd_id": 2,
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:         "type": "bluestore"
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:     },
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:         "osd_id": 0,
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:         "type": "bluestore"
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:     },
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:         "osd_id": 1,
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:         "type": "bluestore"
Oct 01 16:35:35 compute-0 sleepy_wu[95764]:     }
Oct 01 16:35:35 compute-0 sleepy_wu[95764]: }
Oct 01 16:35:35 compute-0 systemd[1]: libpod-e6f91e35fa58a7aecd04d7b3408ff320c5489e211a8f9b2f42007ef7ed0320d6.scope: Deactivated successfully.
Oct 01 16:35:35 compute-0 podman[95748]: 2025-10-01 16:35:35.509682238 +0000 UTC m=+1.135936543 container died e6f91e35fa58a7aecd04d7b3408ff320c5489e211a8f9b2f42007ef7ed0320d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 16:35:35 compute-0 sudo[95834]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifftfsscxmcdbfupvppubgoqncdfaohk ; /usr/bin/python3'
Oct 01 16:35:35 compute-0 sudo[95834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd343e966be919a164b873cc14a66f51511d1fe16d0c5647cf766a21482dc692-merged.mount: Deactivated successfully.
Oct 01 16:35:35 compute-0 podman[95748]: 2025-10-01 16:35:35.571997948 +0000 UTC m=+1.198252253 container remove e6f91e35fa58a7aecd04d7b3408ff320c5489e211a8f9b2f42007ef7ed0320d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:35:35 compute-0 systemd[1]: libpod-conmon-e6f91e35fa58a7aecd04d7b3408ff320c5489e211a8f9b2f42007ef7ed0320d6.scope: Deactivated successfully.
Oct 01 16:35:35 compute-0 sudo[95625]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:35:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:35:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:35 compute-0 sudo[95850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:35 compute-0 sudo[95850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:35 compute-0 sudo[95850]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:35 compute-0 python3[95846]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:35 compute-0 sudo[95875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:35:35 compute-0 sudo[95875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:35 compute-0 sudo[95875]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:35 compute-0 podman[95883]: 2025-10-01 16:35:35.751568489 +0000 UTC m=+0.041030268 container create 6bbef2401bfdae256abd8cbc67f1404ce96309179b002d6587df2d5dd0c21deb (image=quay.io/ceph/ceph:v18, name=serene_leavitt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:35 compute-0 systemd[1]: Started libpod-conmon-6bbef2401bfdae256abd8cbc67f1404ce96309179b002d6587df2d5dd0c21deb.scope.
Oct 01 16:35:35 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d76063554c06a15b6dac40865e67fd235102ddafee972f26f271c645e6dade/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d76063554c06a15b6dac40865e67fd235102ddafee972f26f271c645e6dade/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:35 compute-0 podman[95883]: 2025-10-01 16:35:35.732262489 +0000 UTC m=+0.021724318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:35 compute-0 podman[95883]: 2025-10-01 16:35:35.828938833 +0000 UTC m=+0.118400632 container init 6bbef2401bfdae256abd8cbc67f1404ce96309179b002d6587df2d5dd0c21deb (image=quay.io/ceph/ceph:v18, name=serene_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:35 compute-0 podman[95883]: 2025-10-01 16:35:35.834148738 +0000 UTC m=+0.123610517 container start 6bbef2401bfdae256abd8cbc67f1404ce96309179b002d6587df2d5dd0c21deb (image=quay.io/ceph/ceph:v18, name=serene_leavitt, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 16:35:35 compute-0 podman[95883]: 2025-10-01 16:35:35.836495213 +0000 UTC m=+0.125957012 container attach 6bbef2401bfdae256abd8cbc67f1404ce96309179b002d6587df2d5dd0c21deb (image=quay.io/ceph/ceph:v18, name=serene_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 16:35:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Oct 01 16:35:36 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2590338926' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 01 16:35:36 compute-0 ceph-mon[74273]: pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1343776941' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 01 16:35:36 compute-0 ceph-mon[74273]: osdmap e28: 3 total, 3 up, 3 in
Oct 01 16:35:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2590338926' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 01 16:35:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct 01 16:35:36 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2590338926' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 01 16:35:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Oct 01 16:35:36 compute-0 serene_leavitt[95915]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct 01 16:35:36 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Oct 01 16:35:36 compute-0 systemd[1]: libpod-6bbef2401bfdae256abd8cbc67f1404ce96309179b002d6587df2d5dd0c21deb.scope: Deactivated successfully.
Oct 01 16:35:36 compute-0 podman[95883]: 2025-10-01 16:35:36.674768932 +0000 UTC m=+0.964230751 container died 6bbef2401bfdae256abd8cbc67f1404ce96309179b002d6587df2d5dd0c21deb (image=quay.io/ceph/ceph:v18, name=serene_leavitt, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 16:35:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-46d76063554c06a15b6dac40865e67fd235102ddafee972f26f271c645e6dade-merged.mount: Deactivated successfully.
Oct 01 16:35:36 compute-0 podman[95883]: 2025-10-01 16:35:36.727402904 +0000 UTC m=+1.016864683 container remove 6bbef2401bfdae256abd8cbc67f1404ce96309179b002d6587df2d5dd0c21deb (image=quay.io/ceph/ceph:v18, name=serene_leavitt, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:36 compute-0 systemd[1]: libpod-conmon-6bbef2401bfdae256abd8cbc67f1404ce96309179b002d6587df2d5dd0c21deb.scope: Deactivated successfully.
Oct 01 16:35:36 compute-0 sudo[95834]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:36 compute-0 sudo[95976]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaceyxdsplrhquizxhghkvuenfyzwrse ; /usr/bin/python3'
Oct 01 16:35:36 compute-0 sudo[95976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:36 compute-0 python3[95978]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:37 compute-0 podman[95979]: 2025-10-01 16:35:37.032282894 +0000 UTC m=+0.034562815 container create 0385dacb1e64d462f851449655224bc058849e02a9a4135d399c95c5d0023bb0 (image=quay.io/ceph/ceph:v18, name=condescending_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:37 compute-0 systemd[1]: Started libpod-conmon-0385dacb1e64d462f851449655224bc058849e02a9a4135d399c95c5d0023bb0.scope.
Oct 01 16:35:37 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65778391fbaff5f85473aa0430854bc6a82017560f6d0a0574482a2feccbd4be/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65778391fbaff5f85473aa0430854bc6a82017560f6d0a0574482a2feccbd4be/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:37 compute-0 podman[95979]: 2025-10-01 16:35:37.087105458 +0000 UTC m=+0.089385379 container init 0385dacb1e64d462f851449655224bc058849e02a9a4135d399c95c5d0023bb0 (image=quay.io/ceph/ceph:v18, name=condescending_booth, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 01 16:35:37 compute-0 podman[95979]: 2025-10-01 16:35:37.092699722 +0000 UTC m=+0.094979643 container start 0385dacb1e64d462f851449655224bc058849e02a9a4135d399c95c5d0023bb0 (image=quay.io/ceph/ceph:v18, name=condescending_booth, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 16:35:37 compute-0 podman[95979]: 2025-10-01 16:35:37.095824685 +0000 UTC m=+0.098104616 container attach 0385dacb1e64d462f851449655224bc058849e02a9a4135d399c95c5d0023bb0 (image=quay.io/ceph/ceph:v18, name=condescending_booth, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 16:35:37 compute-0 podman[95979]: 2025-10-01 16:35:37.01712962 +0000 UTC m=+0.019409571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Oct 01 16:35:37 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3062170572' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 01 16:35:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct 01 16:35:37 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3062170572' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 01 16:35:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Oct 01 16:35:37 compute-0 condescending_booth[95992]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct 01 16:35:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2590338926' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 01 16:35:37 compute-0 ceph-mon[74273]: osdmap e29: 3 total, 3 up, 3 in
Oct 01 16:35:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3062170572' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 01 16:35:37 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Oct 01 16:35:37 compute-0 systemd[1]: libpod-0385dacb1e64d462f851449655224bc058849e02a9a4135d399c95c5d0023bb0.scope: Deactivated successfully.
Oct 01 16:35:37 compute-0 podman[95979]: 2025-10-01 16:35:37.671836785 +0000 UTC m=+0.674116706 container died 0385dacb1e64d462f851449655224bc058849e02a9a4135d399c95c5d0023bb0 (image=quay.io/ceph/ceph:v18, name=condescending_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 16:35:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-65778391fbaff5f85473aa0430854bc6a82017560f6d0a0574482a2feccbd4be-merged.mount: Deactivated successfully.
Oct 01 16:35:37 compute-0 podman[95979]: 2025-10-01 16:35:37.715892316 +0000 UTC m=+0.718172257 container remove 0385dacb1e64d462f851449655224bc058849e02a9a4135d399c95c5d0023bb0 (image=quay.io/ceph/ceph:v18, name=condescending_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 16:35:37 compute-0 systemd[1]: libpod-conmon-0385dacb1e64d462f851449655224bc058849e02a9a4135d399c95c5d0023bb0.scope: Deactivated successfully.
Oct 01 16:35:37 compute-0 sudo[95976]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:38 compute-0 python3[96103]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 16:35:38 compute-0 ceph-mon[74273]: pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:38 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3062170572' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 01 16:35:38 compute-0 ceph-mon[74273]: osdmap e30: 3 total, 3 up, 3 in
Oct 01 16:35:38 compute-0 ceph-mon[74273]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 01 16:35:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:35:38 compute-0 python3[96174]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759336538.3181374-33097-94872320667054/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:35:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:39 compute-0 sudo[96274]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdmrfgdsvvzlzeumuvmcudrjruhtgxwl ; /usr/bin/python3'
Oct 01 16:35:39 compute-0 sudo[96274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:39 compute-0 python3[96276]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 16:35:39 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 01 16:35:39 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 01 16:35:39 compute-0 ceph-mon[74273]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 01 16:35:39 compute-0 sudo[96274]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:39 compute-0 sudo[96349]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-davrweoodtbhrrwykncettxorpoaxvjb ; /usr/bin/python3'
Oct 01 16:35:39 compute-0 sudo[96349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:40 compute-0 python3[96351]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759336539.3228014-33111-199155269556466/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=ad2827e85d768cd1e052620cd64460ebc8d0b45c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:35:40 compute-0 sudo[96349]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:40 compute-0 sudo[96399]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eytwvccpfonyqlksnyyojuiqtgfydeeo ; /usr/bin/python3'
Oct 01 16:35:40 compute-0 sudo[96399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:40 compute-0 python3[96401]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:40 compute-0 podman[96402]: 2025-10-01 16:35:40.532506413 +0000 UTC m=+0.044105166 container create 83e236ee89834a4e9d8b340b8081c4f0a4a87c95e463634cb845ef2dd93b0ff9 (image=quay.io/ceph/ceph:v18, name=great_proskuriakova, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 16:35:40 compute-0 systemd[1]: Started libpod-conmon-83e236ee89834a4e9d8b340b8081c4f0a4a87c95e463634cb845ef2dd93b0ff9.scope.
Oct 01 16:35:40 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff223464c006076eac00359423de357ff87b9e0f0cca7a2ad0edd3af6207b1b1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff223464c006076eac00359423de357ff87b9e0f0cca7a2ad0edd3af6207b1b1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff223464c006076eac00359423de357ff87b9e0f0cca7a2ad0edd3af6207b1b1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:40 compute-0 podman[96402]: 2025-10-01 16:35:40.60230049 +0000 UTC m=+0.113899263 container init 83e236ee89834a4e9d8b340b8081c4f0a4a87c95e463634cb845ef2dd93b0ff9 (image=quay.io/ceph/ceph:v18, name=great_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 16:35:40 compute-0 podman[96402]: 2025-10-01 16:35:40.609876288 +0000 UTC m=+0.121475041 container start 83e236ee89834a4e9d8b340b8081c4f0a4a87c95e463634cb845ef2dd93b0ff9 (image=quay.io/ceph/ceph:v18, name=great_proskuriakova, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 16:35:40 compute-0 podman[96402]: 2025-10-01 16:35:40.517315113 +0000 UTC m=+0.028913896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:40 compute-0 podman[96402]: 2025-10-01 16:35:40.612645468 +0000 UTC m=+0.124244221 container attach 83e236ee89834a4e9d8b340b8081c4f0a4a87c95e463634cb845ef2dd93b0ff9 (image=quay.io/ceph/ceph:v18, name=great_proskuriakova, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 16:35:40 compute-0 ceph-mon[74273]: pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:40 compute-0 ceph-mon[74273]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 01 16:35:40 compute-0 ceph-mon[74273]: Cluster is now healthy
Oct 01 16:35:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 01 16:35:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/283860894' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 01 16:35:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/283860894' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 01 16:35:41 compute-0 great_proskuriakova[96418]: 
Oct 01 16:35:41 compute-0 great_proskuriakova[96418]: [global]
Oct 01 16:35:41 compute-0 great_proskuriakova[96418]:         fsid = f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:35:41 compute-0 great_proskuriakova[96418]:         mon_host = 192.168.122.100
Oct 01 16:35:41 compute-0 systemd[1]: libpod-83e236ee89834a4e9d8b340b8081c4f0a4a87c95e463634cb845ef2dd93b0ff9.scope: Deactivated successfully.
Oct 01 16:35:41 compute-0 conmon[96418]: conmon 83e236ee89834a4e9d8b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83e236ee89834a4e9d8b340b8081c4f0a4a87c95e463634cb845ef2dd93b0ff9.scope/container/memory.events
Oct 01 16:35:41 compute-0 podman[96402]: 2025-10-01 16:35:41.148651504 +0000 UTC m=+0.660250297 container died 83e236ee89834a4e9d8b340b8081c4f0a4a87c95e463634cb845ef2dd93b0ff9 (image=quay.io/ceph/ceph:v18, name=great_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff223464c006076eac00359423de357ff87b9e0f0cca7a2ad0edd3af6207b1b1-merged.mount: Deactivated successfully.
Oct 01 16:35:41 compute-0 podman[96402]: 2025-10-01 16:35:41.195280955 +0000 UTC m=+0.706879698 container remove 83e236ee89834a4e9d8b340b8081c4f0a4a87c95e463634cb845ef2dd93b0ff9 (image=quay.io/ceph/ceph:v18, name=great_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:41 compute-0 sudo[96443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:41 compute-0 systemd[1]: libpod-conmon-83e236ee89834a4e9d8b340b8081c4f0a4a87c95e463634cb845ef2dd93b0ff9.scope: Deactivated successfully.
Oct 01 16:35:41 compute-0 sudo[96443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:41 compute-0 sudo[96443]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:41 compute-0 sudo[96399]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:41 compute-0 sudo[96481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:41 compute-0 sudo[96481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:41 compute-0 sudo[96481]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:41 compute-0 sudo[96506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:41 compute-0 sudo[96506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:35:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:35:41 compute-0 sudo[96506]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:35:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:35:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:35:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:35:41 compute-0 sudo[96553]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkwfigjaxceviulifjzdtjloamokwfqr ; /usr/bin/python3'
Oct 01 16:35:41 compute-0 sudo[96553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:41 compute-0 sudo[96555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 16:35:41 compute-0 sudo[96555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:41 compute-0 python3[96560]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:41 compute-0 podman[96582]: 2025-10-01 16:35:41.551084266 +0000 UTC m=+0.045283743 container create 73efe58cf47e10980ac2dfd0eca42e6bf13a5bc85845982d4c79e392533fc312 (image=quay.io/ceph/ceph:v18, name=happy_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:41 compute-0 systemd[1]: Started libpod-conmon-73efe58cf47e10980ac2dfd0eca42e6bf13a5bc85845982d4c79e392533fc312.scope.
Oct 01 16:35:41 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c507129969164b1c78fac79ae12f97558315a6c1b1889bf4574286f86769babe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c507129969164b1c78fac79ae12f97558315a6c1b1889bf4574286f86769babe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c507129969164b1c78fac79ae12f97558315a6c1b1889bf4574286f86769babe/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:41 compute-0 podman[96582]: 2025-10-01 16:35:41.612787699 +0000 UTC m=+0.106987176 container init 73efe58cf47e10980ac2dfd0eca42e6bf13a5bc85845982d4c79e392533fc312 (image=quay.io/ceph/ceph:v18, name=happy_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 16:35:41 compute-0 podman[96582]: 2025-10-01 16:35:41.620969173 +0000 UTC m=+0.115168660 container start 73efe58cf47e10980ac2dfd0eca42e6bf13a5bc85845982d4c79e392533fc312 (image=quay.io/ceph/ceph:v18, name=happy_hamilton, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct 01 16:35:41 compute-0 podman[96582]: 2025-10-01 16:35:41.624841368 +0000 UTC m=+0.119040875 container attach 73efe58cf47e10980ac2dfd0eca42e6bf13a5bc85845982d4c79e392533fc312 (image=quay.io/ceph/ceph:v18, name=happy_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 01 16:35:41 compute-0 podman[96582]: 2025-10-01 16:35:41.531810742 +0000 UTC m=+0.026010259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:41 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/283860894' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 01 16:35:41 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/283860894' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 01 16:35:41 compute-0 podman[96670]: 2025-10-01 16:35:41.887330193 +0000 UTC m=+0.055648688 container exec bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:41 compute-0 podman[96670]: 2025-10-01 16:35:41.971631822 +0000 UTC m=+0.139950287 container exec_died bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Oct 01 16:35:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2611254867' entity='client.admin' 
Oct 01 16:35:42 compute-0 happy_hamilton[96615]: set ssl_option
Oct 01 16:35:42 compute-0 systemd[1]: libpod-73efe58cf47e10980ac2dfd0eca42e6bf13a5bc85845982d4c79e392533fc312.scope: Deactivated successfully.
Oct 01 16:35:42 compute-0 podman[96582]: 2025-10-01 16:35:42.276697412 +0000 UTC m=+0.770896889 container died 73efe58cf47e10980ac2dfd0eca42e6bf13a5bc85845982d4c79e392533fc312 (image=quay.io/ceph/ceph:v18, name=happy_hamilton, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c507129969164b1c78fac79ae12f97558315a6c1b1889bf4574286f86769babe-merged.mount: Deactivated successfully.
Oct 01 16:35:42 compute-0 podman[96582]: 2025-10-01 16:35:42.337362515 +0000 UTC m=+0.831561992 container remove 73efe58cf47e10980ac2dfd0eca42e6bf13a5bc85845982d4c79e392533fc312 (image=quay.io/ceph/ceph:v18, name=happy_hamilton, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 16:35:42 compute-0 systemd[1]: libpod-conmon-73efe58cf47e10980ac2dfd0eca42e6bf13a5bc85845982d4c79e392533fc312.scope: Deactivated successfully.
Oct 01 16:35:42 compute-0 sudo[96553]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:42 compute-0 sudo[96555]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:35:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:35:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:35:42 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:35:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:35:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:35:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:42 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 3dca41a2-2541-460c-90fc-a2696f02500a does not exist
Oct 01 16:35:42 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 948bec31-d14f-4d89-927a-ca7289655f7b does not exist
Oct 01 16:35:42 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 38bcda50-6b3d-49e8-85d8-539ef135b383 does not exist
Oct 01 16:35:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:35:42 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:35:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:35:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:35:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:35:42 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:42 compute-0 sudo[96870]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wldpmpapfldmfnaghcgwiekfokboupfw ; /usr/bin/python3'
Oct 01 16:35:42 compute-0 sudo[96870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:42 compute-0 sudo[96831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:42 compute-0 sudo[96831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:42 compute-0 sudo[96831]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:42 compute-0 sudo[96877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:42 compute-0 sudo[96877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:42 compute-0 sudo[96877]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:42 compute-0 sudo[96902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:42 compute-0 python3[96874]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:42 compute-0 sudo[96902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:42 compute-0 sudo[96902]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:42 compute-0 sudo[96928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:35:42 compute-0 sudo[96928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:42 compute-0 ceph-mon[74273]: pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:42 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2611254867' entity='client.admin' 
Oct 01 16:35:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:35:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:35:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:35:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:42 compute-0 podman[96927]: 2025-10-01 16:35:42.712261287 +0000 UTC m=+0.054483840 container create a4bef07014b63cb7ad3682baaa24a8afc07e2603db7e9d0537ee21889ac4059b (image=quay.io/ceph/ceph:v18, name=sharp_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:35:42 compute-0 systemd[1]: Started libpod-conmon-a4bef07014b63cb7ad3682baaa24a8afc07e2603db7e9d0537ee21889ac4059b.scope.
Oct 01 16:35:42 compute-0 podman[96927]: 2025-10-01 16:35:42.679215754 +0000 UTC m=+0.021438337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:42 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b231227622555e7a158fde23173142fc8667e6279caa0b767d069bbf3a869772/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b231227622555e7a158fde23173142fc8667e6279caa0b767d069bbf3a869772/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b231227622555e7a158fde23173142fc8667e6279caa0b767d069bbf3a869772/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:42 compute-0 podman[96927]: 2025-10-01 16:35:42.806434213 +0000 UTC m=+0.148656796 container init a4bef07014b63cb7ad3682baaa24a8afc07e2603db7e9d0537ee21889ac4059b (image=quay.io/ceph/ceph:v18, name=sharp_jones, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:42 compute-0 podman[96927]: 2025-10-01 16:35:42.814051996 +0000 UTC m=+0.156274539 container start a4bef07014b63cb7ad3682baaa24a8afc07e2603db7e9d0537ee21889ac4059b (image=quay.io/ceph/ceph:v18, name=sharp_jones, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 01 16:35:42 compute-0 podman[96927]: 2025-10-01 16:35:42.819162261 +0000 UTC m=+0.161384814 container attach a4bef07014b63cb7ad3682baaa24a8afc07e2603db7e9d0537ee21889ac4059b (image=quay.io/ceph/ceph:v18, name=sharp_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 01 16:35:42 compute-0 podman[97011]: 2025-10-01 16:35:42.997201282 +0000 UTC m=+0.040611251 container create 1847e3896c136673b0e7abe81c2e1310c22d220bfc8faf29658f546688acd581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:43 compute-0 systemd[1]: Started libpod-conmon-1847e3896c136673b0e7abe81c2e1310c22d220bfc8faf29658f546688acd581.scope.
Oct 01 16:35:43 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:43 compute-0 podman[97011]: 2025-10-01 16:35:42.979716262 +0000 UTC m=+0.023126261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:43 compute-0 podman[97011]: 2025-10-01 16:35:43.090574832 +0000 UTC m=+0.133984861 container init 1847e3896c136673b0e7abe81c2e1310c22d220bfc8faf29658f546688acd581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mayer, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:43 compute-0 podman[97011]: 2025-10-01 16:35:43.100102626 +0000 UTC m=+0.143512595 container start 1847e3896c136673b0e7abe81c2e1310c22d220bfc8faf29658f546688acd581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 16:35:43 compute-0 nervous_mayer[97028]: 167 167
Oct 01 16:35:43 compute-0 systemd[1]: libpod-1847e3896c136673b0e7abe81c2e1310c22d220bfc8faf29658f546688acd581.scope: Deactivated successfully.
Oct 01 16:35:43 compute-0 conmon[97028]: conmon 1847e3896c136673b0e7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1847e3896c136673b0e7abe81c2e1310c22d220bfc8faf29658f546688acd581.scope/container/memory.events
Oct 01 16:35:43 compute-0 podman[97011]: 2025-10-01 16:35:43.14010228 +0000 UTC m=+0.183512269 container attach 1847e3896c136673b0e7abe81c2e1310c22d220bfc8faf29658f546688acd581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:43 compute-0 podman[97011]: 2025-10-01 16:35:43.141297835 +0000 UTC m=+0.184707834 container died 1847e3896c136673b0e7abe81c2e1310c22d220bfc8faf29658f546688acd581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mayer, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-238c9946673f013fa3be2fa951e670dacb2381ea9a7c4476da39fe76580dec81-merged.mount: Deactivated successfully.
Oct 01 16:35:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:43 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:35:43 compute-0 ceph-mgr[74571]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Oct 01 16:35:43 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct 01 16:35:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 01 16:35:43 compute-0 podman[97011]: 2025-10-01 16:35:43.40254386 +0000 UTC m=+0.445953829 container remove 1847e3896c136673b0e7abe81c2e1310c22d220bfc8faf29658f546688acd581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mayer, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 16:35:43 compute-0 systemd[1]: libpod-conmon-1847e3896c136673b0e7abe81c2e1310c22d220bfc8faf29658f546688acd581.scope: Deactivated successfully.
Oct 01 16:35:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:43 compute-0 sharp_jones[96967]: Scheduled rgw.rgw update...
Oct 01 16:35:43 compute-0 systemd[1]: libpod-a4bef07014b63cb7ad3682baaa24a8afc07e2603db7e9d0537ee21889ac4059b.scope: Deactivated successfully.
Oct 01 16:35:43 compute-0 podman[96927]: 2025-10-01 16:35:43.435538378 +0000 UTC m=+0.777760941 container died a4bef07014b63cb7ad3682baaa24a8afc07e2603db7e9d0537ee21889ac4059b (image=quay.io/ceph/ceph:v18, name=sharp_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 16:35:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b231227622555e7a158fde23173142fc8667e6279caa0b767d069bbf3a869772-merged.mount: Deactivated successfully.
Oct 01 16:35:43 compute-0 podman[96927]: 2025-10-01 16:35:43.496863551 +0000 UTC m=+0.839086104 container remove a4bef07014b63cb7ad3682baaa24a8afc07e2603db7e9d0537ee21889ac4059b (image=quay.io/ceph/ceph:v18, name=sharp_jones, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:43 compute-0 systemd[1]: libpod-conmon-a4bef07014b63cb7ad3682baaa24a8afc07e2603db7e9d0537ee21889ac4059b.scope: Deactivated successfully.
Oct 01 16:35:43 compute-0 sudo[96870]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:43 compute-0 podman[97084]: 2025-10-01 16:35:43.557154643 +0000 UTC m=+0.045629277 container create 199aa131a4c670b4a47f0f370f4762b1f314cf25f90f86a7b536b954fef24670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 16:35:43 compute-0 systemd[1]: Started libpod-conmon-199aa131a4c670b4a47f0f370f4762b1f314cf25f90f86a7b536b954fef24670.scope.
Oct 01 16:35:43 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:43 compute-0 podman[97084]: 2025-10-01 16:35:43.533510947 +0000 UTC m=+0.021985601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aed12f5b427ab83ca1a4d82917580bf20376d1492b1cea8dc649284c2ca81f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aed12f5b427ab83ca1a4d82917580bf20376d1492b1cea8dc649284c2ca81f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aed12f5b427ab83ca1a4d82917580bf20376d1492b1cea8dc649284c2ca81f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aed12f5b427ab83ca1a4d82917580bf20376d1492b1cea8dc649284c2ca81f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aed12f5b427ab83ca1a4d82917580bf20376d1492b1cea8dc649284c2ca81f8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:43 compute-0 podman[97084]: 2025-10-01 16:35:43.646405584 +0000 UTC m=+0.134880228 container init 199aa131a4c670b4a47f0f370f4762b1f314cf25f90f86a7b536b954fef24670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:35:43 compute-0 podman[97084]: 2025-10-01 16:35:43.659593704 +0000 UTC m=+0.148068338 container start 199aa131a4c670b4a47f0f370f4762b1f314cf25f90f86a7b536b954fef24670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:43 compute-0 podman[97084]: 2025-10-01 16:35:43.666382894 +0000 UTC m=+0.154857538 container attach 199aa131a4c670b4a47f0f370f4762b1f314cf25f90f86a7b536b954fef24670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 16:35:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:35:44 compute-0 ceph-mon[74273]: pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:44 compute-0 ceph-mon[74273]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:35:44 compute-0 ceph-mon[74273]: Saving service rgw.rgw spec with placement compute-0
Oct 01 16:35:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:44 compute-0 python3[97181]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 16:35:44 compute-0 bold_curran[97101]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:35:44 compute-0 bold_curran[97101]: --> relative data size: 1.0
Oct 01 16:35:44 compute-0 bold_curran[97101]: --> All data devices are unavailable
Oct 01 16:35:44 compute-0 systemd[1]: libpod-199aa131a4c670b4a47f0f370f4762b1f314cf25f90f86a7b536b954fef24670.scope: Deactivated successfully.
Oct 01 16:35:44 compute-0 podman[97084]: 2025-10-01 16:35:44.721544509 +0000 UTC m=+1.210019163 container died 199aa131a4c670b4a47f0f370f4762b1f314cf25f90f86a7b536b954fef24670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Oct 01 16:35:44 compute-0 systemd[1]: libpod-199aa131a4c670b4a47f0f370f4762b1f314cf25f90f86a7b536b954fef24670.scope: Consumed 1.000s CPU time.
Oct 01 16:35:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-7aed12f5b427ab83ca1a4d82917580bf20376d1492b1cea8dc649284c2ca81f8-merged.mount: Deactivated successfully.
Oct 01 16:35:44 compute-0 podman[97084]: 2025-10-01 16:35:44.786399353 +0000 UTC m=+1.274873987 container remove 199aa131a4c670b4a47f0f370f4762b1f314cf25f90f86a7b536b954fef24670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:44 compute-0 systemd[1]: libpod-conmon-199aa131a4c670b4a47f0f370f4762b1f314cf25f90f86a7b536b954fef24670.scope: Deactivated successfully.
Oct 01 16:35:44 compute-0 python3[97273]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759336544.2054474-33152-250873176066325/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:35:44 compute-0 sudo[96928]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:44 compute-0 sudo[97291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:44 compute-0 sudo[97291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:44 compute-0 sudo[97291]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:44 compute-0 sudo[97340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:44 compute-0 sudo[97340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:44 compute-0 sudo[97340]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:45 compute-0 sudo[97365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:45 compute-0 sudo[97365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:45 compute-0 sudo[97365]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:45 compute-0 sudo[97390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:35:45 compute-0 sudo[97390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:45 compute-0 sudo[97438]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xelhuufdfnbbudqadljnhmlpqfisewwb ; /usr/bin/python3'
Oct 01 16:35:45 compute-0 sudo[97438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:45 compute-0 python3[97440]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:45 compute-0 podman[97455]: 2025-10-01 16:35:45.346251808 +0000 UTC m=+0.049528278 container create e149a7ef5d0340d22906e3f706c10dbc8e1a254c642d5cdb0b4ab0818d3e6796 (image=quay.io/ceph/ceph:v18, name=sleepy_ptolemy, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:45 compute-0 systemd[1]: Started libpod-conmon-e149a7ef5d0340d22906e3f706c10dbc8e1a254c642d5cdb0b4ab0818d3e6796.scope.
Oct 01 16:35:45 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244d97fcd77343a7d67d4d2277e5a6def5c71a8361e7ca0de8e8b0a3f22cb151/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244d97fcd77343a7d67d4d2277e5a6def5c71a8361e7ca0de8e8b0a3f22cb151/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244d97fcd77343a7d67d4d2277e5a6def5c71a8361e7ca0de8e8b0a3f22cb151/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:45 compute-0 podman[97455]: 2025-10-01 16:35:45.416034363 +0000 UTC m=+0.119310863 container init e149a7ef5d0340d22906e3f706c10dbc8e1a254c642d5cdb0b4ab0818d3e6796 (image=quay.io/ceph/ceph:v18, name=sleepy_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:45 compute-0 podman[97455]: 2025-10-01 16:35:45.324583194 +0000 UTC m=+0.027859694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:45 compute-0 podman[97455]: 2025-10-01 16:35:45.435323466 +0000 UTC m=+0.138599946 container start e149a7ef5d0340d22906e3f706c10dbc8e1a254c642d5cdb0b4ab0818d3e6796 (image=quay.io/ceph/ceph:v18, name=sleepy_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:45 compute-0 podman[97455]: 2025-10-01 16:35:45.442858789 +0000 UTC m=+0.146135309 container attach e149a7ef5d0340d22906e3f706c10dbc8e1a254c642d5cdb0b4ab0818d3e6796 (image=quay.io/ceph/ceph:v18, name=sleepy_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct 01 16:35:45 compute-0 podman[97499]: 2025-10-01 16:35:45.506549928 +0000 UTC m=+0.038012983 container create e6a36c489bb39cf3fb24c89eda1f8ea0d78fe40df7f7215ce5bf7b272b5ed5cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_knuth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:45 compute-0 systemd[1]: Started libpod-conmon-e6a36c489bb39cf3fb24c89eda1f8ea0d78fe40df7f7215ce5bf7b272b5ed5cd.scope.
Oct 01 16:35:45 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:45 compute-0 podman[97499]: 2025-10-01 16:35:45.488446725 +0000 UTC m=+0.019909790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:45 compute-0 podman[97499]: 2025-10-01 16:35:45.588247978 +0000 UTC m=+0.119711073 container init e6a36c489bb39cf3fb24c89eda1f8ea0d78fe40df7f7215ce5bf7b272b5ed5cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_knuth, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:35:45 compute-0 podman[97499]: 2025-10-01 16:35:45.595445432 +0000 UTC m=+0.126908477 container start e6a36c489bb39cf3fb24c89eda1f8ea0d78fe40df7f7215ce5bf7b272b5ed5cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_knuth, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:45 compute-0 podman[97499]: 2025-10-01 16:35:45.598577312 +0000 UTC m=+0.130040387 container attach e6a36c489bb39cf3fb24c89eda1f8ea0d78fe40df7f7215ce5bf7b272b5ed5cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_knuth, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:45 compute-0 competent_knuth[97516]: 167 167
Oct 01 16:35:45 compute-0 systemd[1]: libpod-e6a36c489bb39cf3fb24c89eda1f8ea0d78fe40df7f7215ce5bf7b272b5ed5cd.scope: Deactivated successfully.
Oct 01 16:35:45 compute-0 conmon[97516]: conmon e6a36c489bb39cf3fb24 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e6a36c489bb39cf3fb24c89eda1f8ea0d78fe40df7f7215ce5bf7b272b5ed5cd.scope/container/memory.events
Oct 01 16:35:45 compute-0 podman[97499]: 2025-10-01 16:35:45.602347658 +0000 UTC m=+0.133810743 container died e6a36c489bb39cf3fb24c89eda1f8ea0d78fe40df7f7215ce5bf7b272b5ed5cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 16:35:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ad15693e673e9480e0c069ccfe49fbbea11184a743e6be22f9d6078fa6d28d0-merged.mount: Deactivated successfully.
Oct 01 16:35:45 compute-0 podman[97499]: 2025-10-01 16:35:45.647185025 +0000 UTC m=+0.178648080 container remove e6a36c489bb39cf3fb24c89eda1f8ea0d78fe40df7f7215ce5bf7b272b5ed5cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:45 compute-0 systemd[1]: libpod-conmon-e6a36c489bb39cf3fb24c89eda1f8ea0d78fe40df7f7215ce5bf7b272b5ed5cd.scope: Deactivated successfully.
Oct 01 16:35:45 compute-0 podman[97559]: 2025-10-01 16:35:45.827583129 +0000 UTC m=+0.043216826 container create a6db234efde108cf0093250da129f05f0d8aecc2ac1aaf414fb494cb66734b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 16:35:45 compute-0 systemd[1]: Started libpod-conmon-a6db234efde108cf0093250da129f05f0d8aecc2ac1aaf414fb494cb66734b8d.scope.
Oct 01 16:35:45 compute-0 podman[97559]: 2025-10-01 16:35:45.808690196 +0000 UTC m=+0.024323873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:45 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4b524ed52a428bf3911e634a820ba5bd7743c9dc38ffe48dff2490dfc36731/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4b524ed52a428bf3911e634a820ba5bd7743c9dc38ffe48dff2490dfc36731/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4b524ed52a428bf3911e634a820ba5bd7743c9dc38ffe48dff2490dfc36731/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4b524ed52a428bf3911e634a820ba5bd7743c9dc38ffe48dff2490dfc36731/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:45 compute-0 podman[97559]: 2025-10-01 16:35:45.928818788 +0000 UTC m=+0.144452485 container init a6db234efde108cf0093250da129f05f0d8aecc2ac1aaf414fb494cb66734b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_heisenberg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 16:35:45 compute-0 podman[97559]: 2025-10-01 16:35:45.941123963 +0000 UTC m=+0.156757620 container start a6db234efde108cf0093250da129f05f0d8aecc2ac1aaf414fb494cb66734b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_heisenberg, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 16:35:45 compute-0 podman[97559]: 2025-10-01 16:35:45.945092584 +0000 UTC m=+0.160726351 container attach a6db234efde108cf0093250da129f05f0d8aecc2ac1aaf414fb494cb66734b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_heisenberg, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:45 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:35:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 01 16:35:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Oct 01 16:35:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 01 16:35:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Oct 01 16:35:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 01 16:35:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Oct 01 16:35:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 01 16:35:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct 01 16:35:46 compute-0 ceph-mon[74273]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 01 16:35:46 compute-0 ceph-mon[74273]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 01 16:35:46 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0[74269]: 2025-10-01T16:35:46.001+0000 7fe4ddde0640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 01 16:35:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 01 16:35:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e2 new map
Oct 01 16:35:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-01T16:35:46.001882+0000
                                           modified        2025-10-01T16:35:46.001992+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Oct 01 16:35:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Oct 01 16:35:46 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Oct 01 16:35:46 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Oct 01 16:35:46 compute-0 ceph-mgr[74571]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct 01 16:35:46 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct 01 16:35:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 01 16:35:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 01 16:35:46 compute-0 systemd[1]: libpod-e149a7ef5d0340d22906e3f706c10dbc8e1a254c642d5cdb0b4ab0818d3e6796.scope: Deactivated successfully.
Oct 01 16:35:46 compute-0 podman[97455]: 2025-10-01 16:35:46.049645928 +0000 UTC m=+0.752922398 container died e149a7ef5d0340d22906e3f706c10dbc8e1a254c642d5cdb0b4ab0818d3e6796 (image=quay.io/ceph/ceph:v18, name=sleepy_ptolemy, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-244d97fcd77343a7d67d4d2277e5a6def5c71a8361e7ca0de8e8b0a3f22cb151-merged.mount: Deactivated successfully.
Oct 01 16:35:46 compute-0 podman[97455]: 2025-10-01 16:35:46.094969218 +0000 UTC m=+0.798245688 container remove e149a7ef5d0340d22906e3f706c10dbc8e1a254c642d5cdb0b4ab0818d3e6796 (image=quay.io/ceph/ceph:v18, name=sleepy_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:46 compute-0 systemd[1]: libpod-conmon-e149a7ef5d0340d22906e3f706c10dbc8e1a254c642d5cdb0b4ab0818d3e6796.scope: Deactivated successfully.
Oct 01 16:35:46 compute-0 sudo[97438]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:46 compute-0 sudo[97620]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yftlypbsdjxyaqrwtpihfdtutnrucoqz ; /usr/bin/python3'
Oct 01 16:35:46 compute-0 sudo[97620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:46 compute-0 python3[97622]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:46 compute-0 ceph-mon[74273]: pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 01 16:35:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 01 16:35:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 01 16:35:46 compute-0 ceph-mon[74273]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 01 16:35:46 compute-0 ceph-mon[74273]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 01 16:35:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 01 16:35:46 compute-0 ceph-mon[74273]: osdmap e31: 3 total, 3 up, 3 in
Oct 01 16:35:46 compute-0 ceph-mon[74273]: fsmap cephfs:0
Oct 01 16:35:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:46 compute-0 podman[97623]: 2025-10-01 16:35:46.490958366 +0000 UTC m=+0.063333281 container create e14e293c9979704fbcd77e0bce48e4020bcd424934bb9678b0f9715c320c72da (image=quay.io/ceph/ceph:v18, name=laughing_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:46 compute-0 systemd[1]: Started libpod-conmon-e14e293c9979704fbcd77e0bce48e4020bcd424934bb9678b0f9715c320c72da.scope.
Oct 01 16:35:46 compute-0 podman[97623]: 2025-10-01 16:35:46.467061345 +0000 UTC m=+0.039436300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:46 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7294d063da2b7006c3c1712b4d93f5bcdfef51062b835f65bfc70ea25d18e4e3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7294d063da2b7006c3c1712b4d93f5bcdfef51062b835f65bfc70ea25d18e4e3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7294d063da2b7006c3c1712b4d93f5bcdfef51062b835f65bfc70ea25d18e4e3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:46 compute-0 podman[97623]: 2025-10-01 16:35:46.584473828 +0000 UTC m=+0.156848753 container init e14e293c9979704fbcd77e0bce48e4020bcd424934bb9678b0f9715c320c72da (image=quay.io/ceph/ceph:v18, name=laughing_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 16:35:46 compute-0 podman[97623]: 2025-10-01 16:35:46.59236815 +0000 UTC m=+0.164743065 container start e14e293c9979704fbcd77e0bce48e4020bcd424934bb9678b0f9715c320c72da (image=quay.io/ceph/ceph:v18, name=laughing_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:46 compute-0 podman[97623]: 2025-10-01 16:35:46.596465305 +0000 UTC m=+0.168840220 container attach e14e293c9979704fbcd77e0bce48e4020bcd424934bb9678b0f9715c320c72da (image=quay.io/ceph/ceph:v18, name=laughing_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]: {
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:     "0": [
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:         {
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "devices": [
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "/dev/loop3"
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             ],
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "lv_name": "ceph_lv0",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "lv_size": "21470642176",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "name": "ceph_lv0",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "tags": {
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.cluster_name": "ceph",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.crush_device_class": "",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.encrypted": "0",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.osd_id": "0",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.type": "block",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.vdo": "0"
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             },
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "type": "block",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "vg_name": "ceph_vg0"
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:         }
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:     ],
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:     "1": [
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:         {
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "devices": [
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "/dev/loop4"
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             ],
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "lv_name": "ceph_lv1",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "lv_size": "21470642176",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "name": "ceph_lv1",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "tags": {
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.cluster_name": "ceph",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.crush_device_class": "",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.encrypted": "0",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.osd_id": "1",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.type": "block",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.vdo": "0"
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             },
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "type": "block",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "vg_name": "ceph_vg1"
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:         }
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:     ],
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:     "2": [
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:         {
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "devices": [
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "/dev/loop5"
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             ],
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "lv_name": "ceph_lv2",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "lv_size": "21470642176",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "name": "ceph_lv2",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "tags": {
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.cluster_name": "ceph",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.crush_device_class": "",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.encrypted": "0",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.osd_id": "2",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.type": "block",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:                 "ceph.vdo": "0"
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             },
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "type": "block",
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:             "vg_name": "ceph_vg2"
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:         }
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]:     ]
Oct 01 16:35:46 compute-0 relaxed_heisenberg[97576]: }
Oct 01 16:35:46 compute-0 systemd[1]: libpod-a6db234efde108cf0093250da129f05f0d8aecc2ac1aaf414fb494cb66734b8d.scope: Deactivated successfully.
Oct 01 16:35:46 compute-0 podman[97559]: 2025-10-01 16:35:46.81326875 +0000 UTC m=+1.028902427 container died a6db234efde108cf0093250da129f05f0d8aecc2ac1aaf414fb494cb66734b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_heisenberg, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 16:35:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee4b524ed52a428bf3911e634a820ba5bd7743c9dc38ffe48dff2490dfc36731-merged.mount: Deactivated successfully.
Oct 01 16:35:46 compute-0 podman[97559]: 2025-10-01 16:35:46.864266105 +0000 UTC m=+1.079899762 container remove a6db234efde108cf0093250da129f05f0d8aecc2ac1aaf414fb494cb66734b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_heisenberg, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 16:35:46 compute-0 systemd[1]: libpod-conmon-a6db234efde108cf0093250da129f05f0d8aecc2ac1aaf414fb494cb66734b8d.scope: Deactivated successfully.
Oct 01 16:35:46 compute-0 sudo[97390]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:46 compute-0 sudo[97659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:46 compute-0 sudo[97659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:46 compute-0 sudo[97659]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:47 compute-0 sudo[97703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:47 compute-0 sudo[97703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:47 compute-0 sudo[97703]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:47 compute-0 sudo[97728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:47 compute-0 sudo[97728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:47 compute-0 sudo[97728]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:47 compute-0 sudo[97753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:35:47 compute-0 sudo[97753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:47 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:35:47 compute-0 ceph-mgr[74571]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct 01 16:35:47 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct 01 16:35:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 01 16:35:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:47 compute-0 laughing_hofstadter[97639]: Scheduled mds.cephfs update...
Oct 01 16:35:47 compute-0 systemd[1]: libpod-e14e293c9979704fbcd77e0bce48e4020bcd424934bb9678b0f9715c320c72da.scope: Deactivated successfully.
Oct 01 16:35:47 compute-0 podman[97623]: 2025-10-01 16:35:47.222577 +0000 UTC m=+0.794951885 container died e14e293c9979704fbcd77e0bce48e4020bcd424934bb9678b0f9715c320c72da (image=quay.io/ceph/ceph:v18, name=laughing_hofstadter, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 01 16:35:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7294d063da2b7006c3c1712b4d93f5bcdfef51062b835f65bfc70ea25d18e4e3-merged.mount: Deactivated successfully.
Oct 01 16:35:47 compute-0 podman[97623]: 2025-10-01 16:35:47.27108169 +0000 UTC m=+0.843456605 container remove e14e293c9979704fbcd77e0bce48e4020bcd424934bb9678b0f9715c320c72da (image=quay.io/ceph/ceph:v18, name=laughing_hofstadter, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 16:35:47 compute-0 systemd[1]: libpod-conmon-e14e293c9979704fbcd77e0bce48e4020bcd424934bb9678b0f9715c320c72da.scope: Deactivated successfully.
Oct 01 16:35:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:47 compute-0 sudo[97620]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:47 compute-0 ceph-mon[74273]: from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:35:47 compute-0 ceph-mon[74273]: Saving service mds.cephfs spec with placement compute-0
Oct 01 16:35:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:47 compute-0 podman[97836]: 2025-10-01 16:35:47.445232895 +0000 UTC m=+0.032385370 container create 1bc46b3de1d902764bc1604f12e0cdbf0d162d19617eb275cff0ad6a339d9381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 16:35:47 compute-0 systemd[1]: Started libpod-conmon-1bc46b3de1d902764bc1604f12e0cdbf0d162d19617eb275cff0ad6a339d9381.scope.
Oct 01 16:35:47 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:47 compute-0 podman[97836]: 2025-10-01 16:35:47.499836011 +0000 UTC m=+0.086988506 container init 1bc46b3de1d902764bc1604f12e0cdbf0d162d19617eb275cff0ad6a339d9381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 16:35:47 compute-0 podman[97836]: 2025-10-01 16:35:47.505190568 +0000 UTC m=+0.092343053 container start 1bc46b3de1d902764bc1604f12e0cdbf0d162d19617eb275cff0ad6a339d9381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 16:35:47 compute-0 podman[97836]: 2025-10-01 16:35:47.508474502 +0000 UTC m=+0.095626987 container attach 1bc46b3de1d902764bc1604f12e0cdbf0d162d19617eb275cff0ad6a339d9381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:47 compute-0 great_grothendieck[97853]: 167 167
Oct 01 16:35:47 compute-0 podman[97836]: 2025-10-01 16:35:47.510190056 +0000 UTC m=+0.097342571 container died 1bc46b3de1d902764bc1604f12e0cdbf0d162d19617eb275cff0ad6a339d9381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 01 16:35:47 compute-0 systemd[1]: libpod-1bc46b3de1d902764bc1604f12e0cdbf0d162d19617eb275cff0ad6a339d9381.scope: Deactivated successfully.
Oct 01 16:35:47 compute-0 podman[97836]: 2025-10-01 16:35:47.431637137 +0000 UTC m=+0.018789632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f65c65da720874b96c93860bcb571be013306466b7aa9ba08ce86dbe35fa646-merged.mount: Deactivated successfully.
Oct 01 16:35:47 compute-0 podman[97836]: 2025-10-01 16:35:47.545010327 +0000 UTC m=+0.132162802 container remove 1bc46b3de1d902764bc1604f12e0cdbf0d162d19617eb275cff0ad6a339d9381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:47 compute-0 systemd[1]: libpod-conmon-1bc46b3de1d902764bc1604f12e0cdbf0d162d19617eb275cff0ad6a339d9381.scope: Deactivated successfully.
Oct 01 16:35:47 compute-0 podman[97877]: 2025-10-01 16:35:47.693948896 +0000 UTC m=+0.045771641 container create cbb68c5bafd0def86a1aac72344547a267a57fb8000a3552651bc6d32904664f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 01 16:35:47 compute-0 systemd[1]: Started libpod-conmon-cbb68c5bafd0def86a1aac72344547a267a57fb8000a3552651bc6d32904664f.scope.
Oct 01 16:35:47 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a96fa17cd5705df606c638b65423d59c292675c98caf7f7716791898f40667a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a96fa17cd5705df606c638b65423d59c292675c98caf7f7716791898f40667a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a96fa17cd5705df606c638b65423d59c292675c98caf7f7716791898f40667a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a96fa17cd5705df606c638b65423d59c292675c98caf7f7716791898f40667a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:47 compute-0 podman[97877]: 2025-10-01 16:35:47.674483359 +0000 UTC m=+0.026306104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:47 compute-0 podman[97877]: 2025-10-01 16:35:47.775588885 +0000 UTC m=+0.127411630 container init cbb68c5bafd0def86a1aac72344547a267a57fb8000a3552651bc6d32904664f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 16:35:47 compute-0 podman[97877]: 2025-10-01 16:35:47.782277116 +0000 UTC m=+0.134099841 container start cbb68c5bafd0def86a1aac72344547a267a57fb8000a3552651bc6d32904664f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 16:35:47 compute-0 podman[97877]: 2025-10-01 16:35:47.786977866 +0000 UTC m=+0.138800591 container attach cbb68c5bafd0def86a1aac72344547a267a57fb8000a3552651bc6d32904664f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_curie, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:47 compute-0 sudo[97974]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxwpcmckvprbqqanovmyyatcyjoyvikj ; /usr/bin/python3'
Oct 01 16:35:47 compute-0 sudo[97974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:48 compute-0 python3[97976]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 16:35:48 compute-0 sudo[97974]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:48 compute-0 sudo[98047]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjhhudoujcqyrcsdgkkhnmabjrdbirgj ; /usr/bin/python3'
Oct 01 16:35:48 compute-0 sudo[98047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:48 compute-0 python3[98049]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759336547.709713-33182-112431247170074/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=636bce12226d86adf51a8a262c22c91f203e7646 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:35:48 compute-0 sudo[98047]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:48 compute-0 ceph-mon[74273]: from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 16:35:48 compute-0 ceph-mon[74273]: Saving service mds.cephfs spec with placement compute-0
Oct 01 16:35:48 compute-0 ceph-mon[74273]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:35:48 compute-0 sudo[98117]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyagqpsuxmmlrmjzutcqhcrmmvfuxyzy ; /usr/bin/python3'
Oct 01 16:35:48 compute-0 sudo[98117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:48 compute-0 frosty_curie[97925]: {
Oct 01 16:35:48 compute-0 frosty_curie[97925]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:35:48 compute-0 frosty_curie[97925]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:48 compute-0 frosty_curie[97925]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:35:48 compute-0 frosty_curie[97925]:         "osd_id": 2,
Oct 01 16:35:48 compute-0 frosty_curie[97925]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:35:48 compute-0 frosty_curie[97925]:         "type": "bluestore"
Oct 01 16:35:48 compute-0 frosty_curie[97925]:     },
Oct 01 16:35:48 compute-0 frosty_curie[97925]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:35:48 compute-0 frosty_curie[97925]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:48 compute-0 frosty_curie[97925]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:35:48 compute-0 frosty_curie[97925]:         "osd_id": 0,
Oct 01 16:35:48 compute-0 frosty_curie[97925]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:35:48 compute-0 frosty_curie[97925]:         "type": "bluestore"
Oct 01 16:35:48 compute-0 frosty_curie[97925]:     },
Oct 01 16:35:48 compute-0 frosty_curie[97925]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:35:48 compute-0 frosty_curie[97925]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:48 compute-0 frosty_curie[97925]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:35:48 compute-0 frosty_curie[97925]:         "osd_id": 1,
Oct 01 16:35:48 compute-0 frosty_curie[97925]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:35:48 compute-0 frosty_curie[97925]:         "type": "bluestore"
Oct 01 16:35:48 compute-0 frosty_curie[97925]:     }
Oct 01 16:35:48 compute-0 frosty_curie[97925]: }
Oct 01 16:35:48 compute-0 systemd[1]: libpod-cbb68c5bafd0def86a1aac72344547a267a57fb8000a3552651bc6d32904664f.scope: Deactivated successfully.
Oct 01 16:35:48 compute-0 systemd[1]: libpod-cbb68c5bafd0def86a1aac72344547a267a57fb8000a3552651bc6d32904664f.scope: Consumed 1.042s CPU time.
Oct 01 16:35:48 compute-0 python3[98122]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:48 compute-0 podman[98128]: 2025-10-01 16:35:48.886250413 +0000 UTC m=+0.039212704 container died cbb68c5bafd0def86a1aac72344547a267a57fb8000a3552651bc6d32904664f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 16:35:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a96fa17cd5705df606c638b65423d59c292675c98caf7f7716791898f40667a7-merged.mount: Deactivated successfully.
Oct 01 16:35:48 compute-0 podman[98128]: 2025-10-01 16:35:48.946364961 +0000 UTC m=+0.099327242 container remove cbb68c5bafd0def86a1aac72344547a267a57fb8000a3552651bc6d32904664f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_curie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:48 compute-0 systemd[1]: libpod-conmon-cbb68c5bafd0def86a1aac72344547a267a57fb8000a3552651bc6d32904664f.scope: Deactivated successfully.
Oct 01 16:35:48 compute-0 podman[98139]: 2025-10-01 16:35:48.960533743 +0000 UTC m=+0.065300651 container create 9da7b8eb71871545625502f5277a75355b86e2bd18d7becb3826f5281e3b8845 (image=quay.io/ceph/ceph:v18, name=busy_moore, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 01 16:35:48 compute-0 sudo[97753]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:48 compute-0 systemd[1]: Started libpod-conmon-9da7b8eb71871545625502f5277a75355b86e2bd18d7becb3826f5281e3b8845.scope.
Oct 01 16:35:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:35:49 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:35:49 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:49 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049d5b6ae888c0c4e2682295120f7448f8ddc3934fe661ea0594a4b8b334c4de/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049d5b6ae888c0c4e2682295120f7448f8ddc3934fe661ea0594a4b8b334c4de/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:49 compute-0 podman[98139]: 2025-10-01 16:35:48.940394408 +0000 UTC m=+0.045161346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:49 compute-0 podman[98139]: 2025-10-01 16:35:49.036570008 +0000 UTC m=+0.141336946 container init 9da7b8eb71871545625502f5277a75355b86e2bd18d7becb3826f5281e3b8845 (image=quay.io/ceph/ceph:v18, name=busy_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:49 compute-0 podman[98139]: 2025-10-01 16:35:49.042674404 +0000 UTC m=+0.147441322 container start 9da7b8eb71871545625502f5277a75355b86e2bd18d7becb3826f5281e3b8845 (image=quay.io/ceph/ceph:v18, name=busy_moore, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:49 compute-0 podman[98139]: 2025-10-01 16:35:49.045703612 +0000 UTC m=+0.150470570 container attach 9da7b8eb71871545625502f5277a75355b86e2bd18d7becb3826f5281e3b8845 (image=quay.io/ceph/ceph:v18, name=busy_moore, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 16:35:49 compute-0 sudo[98160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:49 compute-0 sudo[98160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:49 compute-0 sudo[98160]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:49 compute-0 sudo[98186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:35:49 compute-0 sudo[98186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:49 compute-0 sudo[98186]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:49 compute-0 sudo[98211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:49 compute-0 sudo[98211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:49 compute-0 sudo[98211]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:49 compute-0 sudo[98236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:49 compute-0 sudo[98236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:49 compute-0 sudo[98236]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:49 compute-0 sudo[98261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:49 compute-0 sudo[98261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:49 compute-0 sudo[98261]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:49 compute-0 sudo[98286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 16:35:49 compute-0 sudo[98286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Oct 01 16:35:49 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1213680060' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 01 16:35:49 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1213680060' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 01 16:35:49 compute-0 systemd[1]: libpod-9da7b8eb71871545625502f5277a75355b86e2bd18d7becb3826f5281e3b8845.scope: Deactivated successfully.
Oct 01 16:35:49 compute-0 podman[98139]: 2025-10-01 16:35:49.688227135 +0000 UTC m=+0.792994043 container died 9da7b8eb71871545625502f5277a75355b86e2bd18d7becb3826f5281e3b8845 (image=quay.io/ceph/ceph:v18, name=busy_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 16:35:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-049d5b6ae888c0c4e2682295120f7448f8ddc3934fe661ea0594a4b8b334c4de-merged.mount: Deactivated successfully.
Oct 01 16:35:49 compute-0 podman[98139]: 2025-10-01 16:35:49.814023843 +0000 UTC m=+0.918790761 container remove 9da7b8eb71871545625502f5277a75355b86e2bd18d7becb3826f5281e3b8845 (image=quay.io/ceph/ceph:v18, name=busy_moore, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 16:35:49 compute-0 systemd[1]: libpod-conmon-9da7b8eb71871545625502f5277a75355b86e2bd18d7becb3826f5281e3b8845.scope: Deactivated successfully.
Oct 01 16:35:49 compute-0 sudo[98117]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:49 compute-0 podman[98417]: 2025-10-01 16:35:49.859596879 +0000 UTC m=+0.088323271 container exec bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 16:35:49 compute-0 podman[98417]: 2025-10-01 16:35:49.956309922 +0000 UTC m=+0.185036294 container exec_died bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:50 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:50 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:50 compute-0 ceph-mon[74273]: pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1213680060' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 01 16:35:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1213680060' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 01 16:35:50 compute-0 sudo[98286]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:35:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:50 compute-0 sudo[98562]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bseiuwzphlwzgifywhxrrwllvmbgddfp ; /usr/bin/python3'
Oct 01 16:35:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:35:50 compute-0 sudo[98562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:35:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:35:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:35:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:35:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:50 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 0cef352d-1f7e-4760-9418-8642981ef93c does not exist
Oct 01 16:35:50 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev fae69da1-55d4-45f2-9961-1535d5578df6 does not exist
Oct 01 16:35:50 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 9c746cf6-1948-4937-bd27-f8bb4bd629d1 does not exist
Oct 01 16:35:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:35:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:35:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:35:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:35:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:35:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:50 compute-0 sudo[98565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:50 compute-0 sudo[98565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:50 compute-0 sudo[98565]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:50 compute-0 sudo[98590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:50 compute-0 sudo[98590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:50 compute-0 sudo[98590]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:50 compute-0 python3[98564]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:50 compute-0 sudo[98615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:50 compute-0 sudo[98615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:50 compute-0 sudo[98615]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:50 compute-0 podman[98621]: 2025-10-01 16:35:50.542028334 +0000 UTC m=+0.034657297 container create 7f3a6fee17aa8d1ee954d9f4d59e35c25badbf991d8bd37fc2a648a801594e79 (image=quay.io/ceph/ceph:v18, name=stoic_rosalind, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 16:35:50 compute-0 systemd[1]: Started libpod-conmon-7f3a6fee17aa8d1ee954d9f4d59e35c25badbf991d8bd37fc2a648a801594e79.scope.
Oct 01 16:35:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:50 compute-0 sudo[98655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b0059a31c3219946d374240728ad95c1f5d3dad7d6a4159c762f80d9d0a835/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b0059a31c3219946d374240728ad95c1f5d3dad7d6a4159c762f80d9d0a835/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:50 compute-0 sudo[98655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:50 compute-0 podman[98621]: 2025-10-01 16:35:50.611196373 +0000 UTC m=+0.103825356 container init 7f3a6fee17aa8d1ee954d9f4d59e35c25badbf991d8bd37fc2a648a801594e79 (image=quay.io/ceph/ceph:v18, name=stoic_rosalind, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:50 compute-0 podman[98621]: 2025-10-01 16:35:50.616584261 +0000 UTC m=+0.109213224 container start 7f3a6fee17aa8d1ee954d9f4d59e35c25badbf991d8bd37fc2a648a801594e79 (image=quay.io/ceph/ceph:v18, name=stoic_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:50 compute-0 podman[98621]: 2025-10-01 16:35:50.526091306 +0000 UTC m=+0.018720279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:50 compute-0 podman[98621]: 2025-10-01 16:35:50.622998245 +0000 UTC m=+0.115627198 container attach 7f3a6fee17aa8d1ee954d9f4d59e35c25badbf991d8bd37fc2a648a801594e79 (image=quay.io/ceph/ceph:v18, name=stoic_rosalind, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 16:35:50 compute-0 podman[98725]: 2025-10-01 16:35:50.901397946 +0000 UTC m=+0.037862829 container create b20a693bd52a6460853ed26659f77cc636dd462f95b62585cd015a77fd928ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_einstein, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:50 compute-0 systemd[1]: Started libpod-conmon-b20a693bd52a6460853ed26659f77cc636dd462f95b62585cd015a77fd928ab4.scope.
Oct 01 16:35:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:50 compute-0 podman[98725]: 2025-10-01 16:35:50.973277865 +0000 UTC m=+0.109742768 container init b20a693bd52a6460853ed26659f77cc636dd462f95b62585cd015a77fd928ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:50 compute-0 podman[98725]: 2025-10-01 16:35:50.881438296 +0000 UTC m=+0.017903229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:50 compute-0 podman[98725]: 2025-10-01 16:35:50.979127184 +0000 UTC m=+0.115592057 container start b20a693bd52a6460853ed26659f77cc636dd462f95b62585cd015a77fd928ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:50 compute-0 podman[98725]: 2025-10-01 16:35:50.981584887 +0000 UTC m=+0.118049770 container attach b20a693bd52a6460853ed26659f77cc636dd462f95b62585cd015a77fd928ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_einstein, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:50 compute-0 infallible_einstein[98760]: 167 167
Oct 01 16:35:50 compute-0 systemd[1]: libpod-b20a693bd52a6460853ed26659f77cc636dd462f95b62585cd015a77fd928ab4.scope: Deactivated successfully.
Oct 01 16:35:50 compute-0 podman[98725]: 2025-10-01 16:35:50.983053535 +0000 UTC m=+0.119518418 container died b20a693bd52a6460853ed26659f77cc636dd462f95b62585cd015a77fd928ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-eed6f2795d5d524303240b6778e16e2efc8d908c9fb102f0026f6e7db5d9781d-merged.mount: Deactivated successfully.
Oct 01 16:35:51 compute-0 podman[98725]: 2025-10-01 16:35:51.016699075 +0000 UTC m=+0.153163958 container remove b20a693bd52a6460853ed26659f77cc636dd462f95b62585cd015a77fd928ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 16:35:51 compute-0 systemd[1]: libpod-conmon-b20a693bd52a6460853ed26659f77cc636dd462f95b62585cd015a77fd928ab4.scope: Deactivated successfully.
Oct 01 16:35:51 compute-0 podman[98784]: 2025-10-01 16:35:51.154981902 +0000 UTC m=+0.031427185 container create bb7d83055018b3076e36f86e0a9537b032abef31959f4e9229f019ba6bd8a3f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lamport, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 16:35:51 compute-0 systemd[1]: Started libpod-conmon-bb7d83055018b3076e36f86e0a9537b032abef31959f4e9229f019ba6bd8a3f4.scope.
Oct 01 16:35:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 01 16:35:51 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1978316345' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 01 16:35:51 compute-0 stoic_rosalind[98680]: 
Oct 01 16:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6638c1949e42f55375d297469bc643303aa7bd8afaaee16a3956efe39055fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:51 compute-0 stoic_rosalind[98680]: {"fsid":"f44264e3-e26a-5bd3-9e84-b4ba651d9cf5","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":147,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":31,"num_osds":3,"num_up_osds":3,"osd_up_since":1759336522,"num_in_osds":3,"osd_in_since":1759336495,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83841024,"bytes_avail":64328085504,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-01T16:35:13.275828+0000","services":{}},"progress_events":{}}
Oct 01 16:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6638c1949e42f55375d297469bc643303aa7bd8afaaee16a3956efe39055fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6638c1949e42f55375d297469bc643303aa7bd8afaaee16a3956efe39055fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6638c1949e42f55375d297469bc643303aa7bd8afaaee16a3956efe39055fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6638c1949e42f55375d297469bc643303aa7bd8afaaee16a3956efe39055fc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:51 compute-0 systemd[1]: libpod-7f3a6fee17aa8d1ee954d9f4d59e35c25badbf991d8bd37fc2a648a801594e79.scope: Deactivated successfully.
Oct 01 16:35:51 compute-0 podman[98621]: 2025-10-01 16:35:51.218282911 +0000 UTC m=+0.710911874 container died 7f3a6fee17aa8d1ee954d9f4d59e35c25badbf991d8bd37fc2a648a801594e79 (image=quay.io/ceph/ceph:v18, name=stoic_rosalind, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:51 compute-0 podman[98784]: 2025-10-01 16:35:51.141620951 +0000 UTC m=+0.018066254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:51 compute-0 podman[98784]: 2025-10-01 16:35:51.245574019 +0000 UTC m=+0.122019322 container init bb7d83055018b3076e36f86e0a9537b032abef31959f4e9229f019ba6bd8a3f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 16:35:51 compute-0 podman[98784]: 2025-10-01 16:35:51.251029499 +0000 UTC m=+0.127474782 container start bb7d83055018b3076e36f86e0a9537b032abef31959f4e9229f019ba6bd8a3f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lamport, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 16:35:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-12b0059a31c3219946d374240728ad95c1f5d3dad7d6a4159c762f80d9d0a835-merged.mount: Deactivated successfully.
Oct 01 16:35:51 compute-0 podman[98621]: 2025-10-01 16:35:51.281652672 +0000 UTC m=+0.774281635 container remove 7f3a6fee17aa8d1ee954d9f4d59e35c25badbf991d8bd37fc2a648a801594e79 (image=quay.io/ceph/ceph:v18, name=stoic_rosalind, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:51 compute-0 podman[98784]: 2025-10-01 16:35:51.289435261 +0000 UTC m=+0.165880564 container attach bb7d83055018b3076e36f86e0a9537b032abef31959f4e9229f019ba6bd8a3f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lamport, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 16:35:51 compute-0 sudo[98562]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:51 compute-0 systemd[1]: libpod-conmon-7f3a6fee17aa8d1ee954d9f4d59e35c25badbf991d8bd37fc2a648a801594e79.scope: Deactivated successfully.
Oct 01 16:35:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:35:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:35:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:35:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:51 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1978316345' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 01 16:35:51 compute-0 sudo[98843]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utmnqxvpmjehgcdgmucvhxnpvoqjudcm ; /usr/bin/python3'
Oct 01 16:35:51 compute-0 sudo[98843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:51 compute-0 python3[98845]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:51 compute-0 podman[98846]: 2025-10-01 16:35:51.65492018 +0000 UTC m=+0.090275140 container create a1a5e2cf637097ef5b41df5048b406fc7fbb2d13e489b3fca460453bfc0a1459 (image=quay.io/ceph/ceph:v18, name=wizardly_bell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 16:35:51 compute-0 systemd[1]: Started libpod-conmon-a1a5e2cf637097ef5b41df5048b406fc7fbb2d13e489b3fca460453bfc0a1459.scope.
Oct 01 16:35:51 compute-0 podman[98846]: 2025-10-01 16:35:51.591034036 +0000 UTC m=+0.026389026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add8bc8fb54539fa654a277fcf9fed39cdc996e90a51340225955fe5c1f395c0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add8bc8fb54539fa654a277fcf9fed39cdc996e90a51340225955fe5c1f395c0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:51 compute-0 podman[98846]: 2025-10-01 16:35:51.722417356 +0000 UTC m=+0.157772336 container init a1a5e2cf637097ef5b41df5048b406fc7fbb2d13e489b3fca460453bfc0a1459 (image=quay.io/ceph/ceph:v18, name=wizardly_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:51 compute-0 podman[98846]: 2025-10-01 16:35:51.728198484 +0000 UTC m=+0.163553444 container start a1a5e2cf637097ef5b41df5048b406fc7fbb2d13e489b3fca460453bfc0a1459 (image=quay.io/ceph/ceph:v18, name=wizardly_bell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 16:35:51 compute-0 podman[98846]: 2025-10-01 16:35:51.730908893 +0000 UTC m=+0.166263853 container attach a1a5e2cf637097ef5b41df5048b406fc7fbb2d13e489b3fca460453bfc0a1459 (image=quay.io/ceph/ceph:v18, name=wizardly_bell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:52 compute-0 priceless_lamport[98801]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:35:52 compute-0 priceless_lamport[98801]: --> relative data size: 1.0
Oct 01 16:35:52 compute-0 priceless_lamport[98801]: --> All data devices are unavailable
Oct 01 16:35:52 compute-0 systemd[1]: libpod-bb7d83055018b3076e36f86e0a9537b032abef31959f4e9229f019ba6bd8a3f4.scope: Deactivated successfully.
Oct 01 16:35:52 compute-0 conmon[98801]: conmon bb7d83055018b3076e36 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb7d83055018b3076e36f86e0a9537b032abef31959f4e9229f019ba6bd8a3f4.scope/container/memory.events
Oct 01 16:35:52 compute-0 podman[98784]: 2025-10-01 16:35:52.313479904 +0000 UTC m=+1.189925207 container died bb7d83055018b3076e36f86e0a9537b032abef31959f4e9229f019ba6bd8a3f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lamport, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 16:35:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/618568547' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 16:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b6638c1949e42f55375d297469bc643303aa7bd8afaaee16a3956efe39055fc-merged.mount: Deactivated successfully.
Oct 01 16:35:52 compute-0 wizardly_bell[98861]: 
Oct 01 16:35:52 compute-0 wizardly_bell[98861]: {"epoch":1,"fsid":"f44264e3-e26a-5bd3-9e84-b4ba651d9cf5","modified":"2025-10-01T16:33:18.050288Z","created":"2025-10-01T16:33:18.050288Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Oct 01 16:35:52 compute-0 wizardly_bell[98861]: dumped monmap epoch 1
Oct 01 16:35:52 compute-0 systemd[1]: libpod-a1a5e2cf637097ef5b41df5048b406fc7fbb2d13e489b3fca460453bfc0a1459.scope: Deactivated successfully.
Oct 01 16:35:52 compute-0 podman[98846]: 2025-10-01 16:35:52.362690013 +0000 UTC m=+0.798044973 container died a1a5e2cf637097ef5b41df5048b406fc7fbb2d13e489b3fca460453bfc0a1459 (image=quay.io/ceph/ceph:v18, name=wizardly_bell, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 16:35:52 compute-0 ceph-mon[74273]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/618568547' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 16:35:52 compute-0 podman[98784]: 2025-10-01 16:35:52.390964686 +0000 UTC m=+1.267409979 container remove bb7d83055018b3076e36f86e0a9537b032abef31959f4e9229f019ba6bd8a3f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lamport, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-add8bc8fb54539fa654a277fcf9fed39cdc996e90a51340225955fe5c1f395c0-merged.mount: Deactivated successfully.
Oct 01 16:35:52 compute-0 systemd[1]: libpod-conmon-bb7d83055018b3076e36f86e0a9537b032abef31959f4e9229f019ba6bd8a3f4.scope: Deactivated successfully.
Oct 01 16:35:52 compute-0 sudo[98655]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:52 compute-0 podman[98846]: 2025-10-01 16:35:52.429958374 +0000 UTC m=+0.865313334 container remove a1a5e2cf637097ef5b41df5048b406fc7fbb2d13e489b3fca460453bfc0a1459 (image=quay.io/ceph/ceph:v18, name=wizardly_bell, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 16:35:52 compute-0 systemd[1]: libpod-conmon-a1a5e2cf637097ef5b41df5048b406fc7fbb2d13e489b3fca460453bfc0a1459.scope: Deactivated successfully.
Oct 01 16:35:52 compute-0 sudo[98843]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:52 compute-0 sudo[98932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:52 compute-0 sudo[98932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:52 compute-0 sudo[98932]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:52 compute-0 sudo[98957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:52 compute-0 sudo[98957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:52 compute-0 sudo[98957]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:52 compute-0 sudo[98982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:52 compute-0 sudo[98982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:52 compute-0 sudo[98982]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:52 compute-0 sudo[99007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:35:52 compute-0 sudo[99007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:52 compute-0 sudo[99075]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyvrnizoxjknjcuhdpjuhyjnzgyhmbth ; /usr/bin/python3'
Oct 01 16:35:52 compute-0 sudo[99075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:52 compute-0 python3[99082]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:52 compute-0 podman[99100]: 2025-10-01 16:35:52.999523432 +0000 UTC m=+0.048852580 container create 8fb58c99f7a033ae4725455de28f1cec751b9f1f0911c865d25e552e76f72096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nobel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:53 compute-0 podman[99114]: 2025-10-01 16:35:53.039886475 +0000 UTC m=+0.036632878 container create 6574091b389789288182a02fc3c4ed6e5a2229b520bea7e8e6165c055b1d69f8 (image=quay.io/ceph/ceph:v18, name=wizardly_williams, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 01 16:35:53 compute-0 systemd[1]: Started libpod-conmon-8fb58c99f7a033ae4725455de28f1cec751b9f1f0911c865d25e552e76f72096.scope.
Oct 01 16:35:53 compute-0 podman[99100]: 2025-10-01 16:35:52.978983007 +0000 UTC m=+0.028312185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:53 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:53 compute-0 systemd[1]: Started libpod-conmon-6574091b389789288182a02fc3c4ed6e5a2229b520bea7e8e6165c055b1d69f8.scope.
Oct 01 16:35:53 compute-0 podman[99100]: 2025-10-01 16:35:53.089186925 +0000 UTC m=+0.138516173 container init 8fb58c99f7a033ae4725455de28f1cec751b9f1f0911c865d25e552e76f72096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nobel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct 01 16:35:53 compute-0 podman[99100]: 2025-10-01 16:35:53.10074208 +0000 UTC m=+0.150071228 container start 8fb58c99f7a033ae4725455de28f1cec751b9f1f0911c865d25e552e76f72096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nobel, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 16:35:53 compute-0 podman[99100]: 2025-10-01 16:35:53.104155747 +0000 UTC m=+0.153484915 container attach 8fb58c99f7a033ae4725455de28f1cec751b9f1f0911c865d25e552e76f72096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 16:35:53 compute-0 agitated_nobel[99129]: 167 167
Oct 01 16:35:53 compute-0 podman[99100]: 2025-10-01 16:35:53.106647981 +0000 UTC m=+0.155977159 container died 8fb58c99f7a033ae4725455de28f1cec751b9f1f0911c865d25e552e76f72096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:53 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:53 compute-0 systemd[1]: libpod-8fb58c99f7a033ae4725455de28f1cec751b9f1f0911c865d25e552e76f72096.scope: Deactivated successfully.
Oct 01 16:35:53 compute-0 podman[99114]: 2025-10-01 16:35:53.02406007 +0000 UTC m=+0.020806493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/285ad0df08f16a28d84c1ba85317bb83e86a3d97b3f38fec9779134f7fc10e76/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/285ad0df08f16a28d84c1ba85317bb83e86a3d97b3f38fec9779134f7fc10e76/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:53 compute-0 podman[99114]: 2025-10-01 16:35:53.142537209 +0000 UTC m=+0.139283692 container init 6574091b389789288182a02fc3c4ed6e5a2229b520bea7e8e6165c055b1d69f8 (image=quay.io/ceph/ceph:v18, name=wizardly_williams, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 01 16:35:53 compute-0 podman[99114]: 2025-10-01 16:35:53.15352335 +0000 UTC m=+0.150269793 container start 6574091b389789288182a02fc3c4ed6e5a2229b520bea7e8e6165c055b1d69f8 (image=quay.io/ceph/ceph:v18, name=wizardly_williams, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-5090acac7f91d0a2feacde8d328425baea320ff869f92fc153290b3510bb3589-merged.mount: Deactivated successfully.
Oct 01 16:35:53 compute-0 podman[99114]: 2025-10-01 16:35:53.157648236 +0000 UTC m=+0.154394689 container attach 6574091b389789288182a02fc3c4ed6e5a2229b520bea7e8e6165c055b1d69f8 (image=quay.io/ceph/ceph:v18, name=wizardly_williams, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:53 compute-0 podman[99100]: 2025-10-01 16:35:53.169194211 +0000 UTC m=+0.218523359 container remove 8fb58c99f7a033ae4725455de28f1cec751b9f1f0911c865d25e552e76f72096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:53 compute-0 systemd[1]: libpod-conmon-8fb58c99f7a033ae4725455de28f1cec751b9f1f0911c865d25e552e76f72096.scope: Deactivated successfully.
Oct 01 16:35:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:53 compute-0 podman[99159]: 2025-10-01 16:35:53.369249748 +0000 UTC m=+0.055856460 container create 553076ef497a084044968bc5ad4711e5a5e94bd3facd979cf449bf9f9aa62275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mayer, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:53 compute-0 systemd[1]: Started libpod-conmon-553076ef497a084044968bc5ad4711e5a5e94bd3facd979cf449bf9f9aa62275.scope.
Oct 01 16:35:53 compute-0 podman[99159]: 2025-10-01 16:35:53.335169796 +0000 UTC m=+0.021776538 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:53 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e17f2993493ec1550b9c11371d2e8541773ecd44ca7afcfb12604c67e9e78c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e17f2993493ec1550b9c11371d2e8541773ecd44ca7afcfb12604c67e9e78c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e17f2993493ec1550b9c11371d2e8541773ecd44ca7afcfb12604c67e9e78c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e17f2993493ec1550b9c11371d2e8541773ecd44ca7afcfb12604c67e9e78c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:53 compute-0 podman[99159]: 2025-10-01 16:35:53.456000857 +0000 UTC m=+0.142607559 container init 553076ef497a084044968bc5ad4711e5a5e94bd3facd979cf449bf9f9aa62275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 16:35:53 compute-0 podman[99159]: 2025-10-01 16:35:53.467086171 +0000 UTC m=+0.153692873 container start 553076ef497a084044968bc5ad4711e5a5e94bd3facd979cf449bf9f9aa62275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 16:35:53 compute-0 podman[99159]: 2025-10-01 16:35:53.470465577 +0000 UTC m=+0.157072299 container attach 553076ef497a084044968bc5ad4711e5a5e94bd3facd979cf449bf9f9aa62275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mayer, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:35:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Oct 01 16:35:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3925408747' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 01 16:35:53 compute-0 wizardly_williams[99134]: [client.openstack]
Oct 01 16:35:53 compute-0 wizardly_williams[99134]:         key = AQC1V91oAAAAABAAOuYA0InTprUH/o2bXP7eNg==
Oct 01 16:35:53 compute-0 wizardly_williams[99134]:         caps mgr = "allow *"
Oct 01 16:35:53 compute-0 wizardly_williams[99134]:         caps mon = "profile rbd"
Oct 01 16:35:53 compute-0 wizardly_williams[99134]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct 01 16:35:53 compute-0 systemd[1]: libpod-6574091b389789288182a02fc3c4ed6e5a2229b520bea7e8e6165c055b1d69f8.scope: Deactivated successfully.
Oct 01 16:35:53 compute-0 podman[99114]: 2025-10-01 16:35:53.780656741 +0000 UTC m=+0.777403154 container died 6574091b389789288182a02fc3c4ed6e5a2229b520bea7e8e6165c055b1d69f8 (image=quay.io/ceph/ceph:v18, name=wizardly_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 16:35:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-285ad0df08f16a28d84c1ba85317bb83e86a3d97b3f38fec9779134f7fc10e76-merged.mount: Deactivated successfully.
Oct 01 16:35:53 compute-0 podman[99114]: 2025-10-01 16:35:53.832695292 +0000 UTC m=+0.829441705 container remove 6574091b389789288182a02fc3c4ed6e5a2229b520bea7e8e6165c055b1d69f8 (image=quay.io/ceph/ceph:v18, name=wizardly_williams, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 16:35:53 compute-0 systemd[1]: libpod-conmon-6574091b389789288182a02fc3c4ed6e5a2229b520bea7e8e6165c055b1d69f8.scope: Deactivated successfully.
Oct 01 16:35:53 compute-0 sudo[99075]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:54 compute-0 sad_mayer[99176]: {
Oct 01 16:35:54 compute-0 sad_mayer[99176]:     "0": [
Oct 01 16:35:54 compute-0 sad_mayer[99176]:         {
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "devices": [
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "/dev/loop3"
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             ],
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "lv_name": "ceph_lv0",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "lv_size": "21470642176",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "name": "ceph_lv0",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "tags": {
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.cluster_name": "ceph",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.crush_device_class": "",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.encrypted": "0",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.osd_id": "0",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.type": "block",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.vdo": "0"
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             },
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "type": "block",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "vg_name": "ceph_vg0"
Oct 01 16:35:54 compute-0 sad_mayer[99176]:         }
Oct 01 16:35:54 compute-0 sad_mayer[99176]:     ],
Oct 01 16:35:54 compute-0 sad_mayer[99176]:     "1": [
Oct 01 16:35:54 compute-0 sad_mayer[99176]:         {
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "devices": [
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "/dev/loop4"
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             ],
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "lv_name": "ceph_lv1",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "lv_size": "21470642176",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "name": "ceph_lv1",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "tags": {
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.cluster_name": "ceph",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.crush_device_class": "",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.encrypted": "0",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.osd_id": "1",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.type": "block",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.vdo": "0"
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             },
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "type": "block",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "vg_name": "ceph_vg1"
Oct 01 16:35:54 compute-0 sad_mayer[99176]:         }
Oct 01 16:35:54 compute-0 sad_mayer[99176]:     ],
Oct 01 16:35:54 compute-0 sad_mayer[99176]:     "2": [
Oct 01 16:35:54 compute-0 sad_mayer[99176]:         {
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "devices": [
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "/dev/loop5"
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             ],
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "lv_name": "ceph_lv2",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "lv_size": "21470642176",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "name": "ceph_lv2",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "tags": {
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.cluster_name": "ceph",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.crush_device_class": "",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.encrypted": "0",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.osd_id": "2",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.type": "block",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:                 "ceph.vdo": "0"
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             },
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "type": "block",
Oct 01 16:35:54 compute-0 sad_mayer[99176]:             "vg_name": "ceph_vg2"
Oct 01 16:35:54 compute-0 sad_mayer[99176]:         }
Oct 01 16:35:54 compute-0 sad_mayer[99176]:     ]
Oct 01 16:35:54 compute-0 sad_mayer[99176]: }
Oct 01 16:35:54 compute-0 systemd[1]: libpod-553076ef497a084044968bc5ad4711e5a5e94bd3facd979cf449bf9f9aa62275.scope: Deactivated successfully.
Oct 01 16:35:54 compute-0 podman[99159]: 2025-10-01 16:35:54.280826335 +0000 UTC m=+0.967433047 container died 553076ef497a084044968bc5ad4711e5a5e94bd3facd979cf449bf9f9aa62275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e17f2993493ec1550b9c11371d2e8541773ecd44ca7afcfb12604c67e9e78c5-merged.mount: Deactivated successfully.
Oct 01 16:35:54 compute-0 podman[99159]: 2025-10-01 16:35:54.330871485 +0000 UTC m=+1.017478187 container remove 553076ef497a084044968bc5ad4711e5a5e94bd3facd979cf449bf9f9aa62275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:54 compute-0 systemd[1]: libpod-conmon-553076ef497a084044968bc5ad4711e5a5e94bd3facd979cf449bf9f9aa62275.scope: Deactivated successfully.
Oct 01 16:35:54 compute-0 sudo[99007]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:54 compute-0 ceph-mon[74273]: pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:54 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3925408747' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 01 16:35:54 compute-0 sudo[99235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:54 compute-0 sudo[99235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:54 compute-0 sudo[99235]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:54 compute-0 sudo[99260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:54 compute-0 sudo[99260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:54 compute-0 sudo[99260]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:54 compute-0 sudo[99285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:54 compute-0 sudo[99285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:54 compute-0 sudo[99285]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:54 compute-0 sudo[99310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:35:54 compute-0 sudo[99310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:54 compute-0 podman[99394]: 2025-10-01 16:35:54.833707236 +0000 UTC m=+0.034625517 container create 596a38918848a4194e8d7d305dc489487bf25b4bdab299a82d873bcfd6e8ff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:54 compute-0 systemd[1]: Started libpod-conmon-596a38918848a4194e8d7d305dc489487bf25b4bdab299a82d873bcfd6e8ff4b.scope.
Oct 01 16:35:54 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:54 compute-0 podman[99394]: 2025-10-01 16:35:54.899909019 +0000 UTC m=+0.100827330 container init 596a38918848a4194e8d7d305dc489487bf25b4bdab299a82d873bcfd6e8ff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_newton, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:54 compute-0 podman[99394]: 2025-10-01 16:35:54.906038456 +0000 UTC m=+0.106956737 container start 596a38918848a4194e8d7d305dc489487bf25b4bdab299a82d873bcfd6e8ff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_newton, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:54 compute-0 friendly_newton[99445]: 167 167
Oct 01 16:35:54 compute-0 systemd[1]: libpod-596a38918848a4194e8d7d305dc489487bf25b4bdab299a82d873bcfd6e8ff4b.scope: Deactivated successfully.
Oct 01 16:35:54 compute-0 podman[99394]: 2025-10-01 16:35:54.91011101 +0000 UTC m=+0.111029331 container attach 596a38918848a4194e8d7d305dc489487bf25b4bdab299a82d873bcfd6e8ff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:54 compute-0 podman[99394]: 2025-10-01 16:35:54.911510936 +0000 UTC m=+0.112429227 container died 596a38918848a4194e8d7d305dc489487bf25b4bdab299a82d873bcfd6e8ff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:54 compute-0 podman[99394]: 2025-10-01 16:35:54.819171424 +0000 UTC m=+0.020089735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-37b00656cfe4b3cff7f254f4ca55e260a38c58630f0db1a4e8ee78c9ba156b40-merged.mount: Deactivated successfully.
Oct 01 16:35:54 compute-0 podman[99394]: 2025-10-01 16:35:54.944793667 +0000 UTC m=+0.145711958 container remove 596a38918848a4194e8d7d305dc489487bf25b4bdab299a82d873bcfd6e8ff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 16:35:54 compute-0 systemd[1]: libpod-conmon-596a38918848a4194e8d7d305dc489487bf25b4bdab299a82d873bcfd6e8ff4b.scope: Deactivated successfully.
Oct 01 16:35:55 compute-0 sudo[99571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcqbwjltxyqfxsbytpbqbxcyvonrkyfn ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759336554.807347-33255-253545466311549/async_wrapper.py j828415165638 30 /home/zuul/.ansible/tmp/ansible-tmp-1759336554.807347-33255-253545466311549/AnsiballZ_command.py _'
Oct 01 16:35:55 compute-0 sudo[99571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:55 compute-0 podman[99554]: 2025-10-01 16:35:55.103830955 +0000 UTC m=+0.040053125 container create 0321f81ca2cb2eba6f1366c3cf272be099208036b078752be8b9b30d4b4e1e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 16:35:55 compute-0 systemd[1]: Started libpod-conmon-0321f81ca2cb2eba6f1366c3cf272be099208036b078752be8b9b30d4b4e1e3d.scope.
Oct 01 16:35:55 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/147554fd2c120d094ca62dc2e3aae9a11a2867ee0a7a8e2d3b7ba6bc4e367474/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/147554fd2c120d094ca62dc2e3aae9a11a2867ee0a7a8e2d3b7ba6bc4e367474/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/147554fd2c120d094ca62dc2e3aae9a11a2867ee0a7a8e2d3b7ba6bc4e367474/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/147554fd2c120d094ca62dc2e3aae9a11a2867ee0a7a8e2d3b7ba6bc4e367474/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:55 compute-0 podman[99554]: 2025-10-01 16:35:55.177451178 +0000 UTC m=+0.113673378 container init 0321f81ca2cb2eba6f1366c3cf272be099208036b078752be8b9b30d4b4e1e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:55 compute-0 podman[99554]: 2025-10-01 16:35:55.08602592 +0000 UTC m=+0.022248120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:55 compute-0 podman[99554]: 2025-10-01 16:35:55.186154411 +0000 UTC m=+0.122376571 container start 0321f81ca2cb2eba6f1366c3cf272be099208036b078752be8b9b30d4b4e1e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:55 compute-0 podman[99554]: 2025-10-01 16:35:55.188927122 +0000 UTC m=+0.125149342 container attach 0321f81ca2cb2eba6f1366c3cf272be099208036b078752be8b9b30d4b4e1e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 16:35:55 compute-0 ansible-async_wrapper.py[99578]: Invoked with j828415165638 30 /home/zuul/.ansible/tmp/ansible-tmp-1759336554.807347-33255-253545466311549/AnsiballZ_command.py _
Oct 01 16:35:55 compute-0 ansible-async_wrapper.py[99591]: Starting module and watcher
Oct 01 16:35:55 compute-0 ansible-async_wrapper.py[99591]: Start watching 99592 (30)
Oct 01 16:35:55 compute-0 ansible-async_wrapper.py[99592]: Start module (99592)
Oct 01 16:35:55 compute-0 ansible-async_wrapper.py[99578]: Return async_wrapper task started.
Oct 01 16:35:55 compute-0 sudo[99571]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:55 compute-0 python3[99593]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:55 compute-0 podman[99594]: 2025-10-01 16:35:55.449454666 +0000 UTC m=+0.041335669 container create 5d8e26b8acdd8cea7d37100aacee3f9a5d263a0a782ad1f36fd4f8fe77ade2c5 (image=quay.io/ceph/ceph:v18, name=boring_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 16:35:55 compute-0 systemd[1]: Started libpod-conmon-5d8e26b8acdd8cea7d37100aacee3f9a5d263a0a782ad1f36fd4f8fe77ade2c5.scope.
Oct 01 16:35:55 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/305bf3c923324fdb4c1544bd431ebfbff402986b4706fb8be61150d325e2520e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/305bf3c923324fdb4c1544bd431ebfbff402986b4706fb8be61150d325e2520e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:55 compute-0 podman[99594]: 2025-10-01 16:35:55.428867049 +0000 UTC m=+0.020748062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:55 compute-0 podman[99594]: 2025-10-01 16:35:55.55127722 +0000 UTC m=+0.143158253 container init 5d8e26b8acdd8cea7d37100aacee3f9a5d263a0a782ad1f36fd4f8fe77ade2c5 (image=quay.io/ceph/ceph:v18, name=boring_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:55 compute-0 podman[99594]: 2025-10-01 16:35:55.558821013 +0000 UTC m=+0.150702016 container start 5d8e26b8acdd8cea7d37100aacee3f9a5d263a0a782ad1f36fd4f8fe77ade2c5 (image=quay.io/ceph/ceph:v18, name=boring_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:55 compute-0 podman[99594]: 2025-10-01 16:35:55.561984844 +0000 UTC m=+0.153865867 container attach 5d8e26b8acdd8cea7d37100aacee3f9a5d263a0a782ad1f36fd4f8fe77ade2c5 (image=quay.io/ceph/ceph:v18, name=boring_hellman, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:56 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 16:35:56 compute-0 boring_hellman[99609]: 
Oct 01 16:35:56 compute-0 boring_hellman[99609]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 01 16:35:56 compute-0 systemd[1]: libpod-5d8e26b8acdd8cea7d37100aacee3f9a5d263a0a782ad1f36fd4f8fe77ade2c5.scope: Deactivated successfully.
Oct 01 16:35:56 compute-0 podman[99594]: 2025-10-01 16:35:56.095228063 +0000 UTC m=+0.687109066 container died 5d8e26b8acdd8cea7d37100aacee3f9a5d263a0a782ad1f36fd4f8fe77ade2c5 (image=quay.io/ceph/ceph:v18, name=boring_hellman, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:56 compute-0 cool_liskov[99584]: {
Oct 01 16:35:56 compute-0 cool_liskov[99584]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:35:56 compute-0 cool_liskov[99584]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:56 compute-0 cool_liskov[99584]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:35:56 compute-0 cool_liskov[99584]:         "osd_id": 2,
Oct 01 16:35:56 compute-0 cool_liskov[99584]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:35:56 compute-0 cool_liskov[99584]:         "type": "bluestore"
Oct 01 16:35:56 compute-0 cool_liskov[99584]:     },
Oct 01 16:35:56 compute-0 cool_liskov[99584]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:35:56 compute-0 cool_liskov[99584]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:56 compute-0 cool_liskov[99584]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:35:56 compute-0 cool_liskov[99584]:         "osd_id": 0,
Oct 01 16:35:56 compute-0 cool_liskov[99584]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:35:56 compute-0 cool_liskov[99584]:         "type": "bluestore"
Oct 01 16:35:56 compute-0 cool_liskov[99584]:     },
Oct 01 16:35:56 compute-0 cool_liskov[99584]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:35:56 compute-0 cool_liskov[99584]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:35:56 compute-0 cool_liskov[99584]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:35:56 compute-0 cool_liskov[99584]:         "osd_id": 1,
Oct 01 16:35:56 compute-0 cool_liskov[99584]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:35:56 compute-0 cool_liskov[99584]:         "type": "bluestore"
Oct 01 16:35:56 compute-0 cool_liskov[99584]:     }
Oct 01 16:35:56 compute-0 cool_liskov[99584]: }
Oct 01 16:35:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-305bf3c923324fdb4c1544bd431ebfbff402986b4706fb8be61150d325e2520e-merged.mount: Deactivated successfully.
Oct 01 16:35:56 compute-0 systemd[1]: libpod-0321f81ca2cb2eba6f1366c3cf272be099208036b078752be8b9b30d4b4e1e3d.scope: Deactivated successfully.
Oct 01 16:35:56 compute-0 podman[99554]: 2025-10-01 16:35:56.144999766 +0000 UTC m=+1.081221946 container died 0321f81ca2cb2eba6f1366c3cf272be099208036b078752be8b9b30d4b4e1e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 16:35:56 compute-0 podman[99594]: 2025-10-01 16:35:56.162054313 +0000 UTC m=+0.753935316 container remove 5d8e26b8acdd8cea7d37100aacee3f9a5d263a0a782ad1f36fd4f8fe77ade2c5 (image=quay.io/ceph/ceph:v18, name=boring_hellman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:35:56 compute-0 systemd[1]: libpod-conmon-5d8e26b8acdd8cea7d37100aacee3f9a5d263a0a782ad1f36fd4f8fe77ade2c5.scope: Deactivated successfully.
Oct 01 16:35:56 compute-0 ansible-async_wrapper.py[99592]: Module complete (99592)
Oct 01 16:35:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-147554fd2c120d094ca62dc2e3aae9a11a2867ee0a7a8e2d3b7ba6bc4e367474-merged.mount: Deactivated successfully.
Oct 01 16:35:56 compute-0 sudo[99730]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjxjgucmomcnlnjbzdkaixhhymvcejjg ; /usr/bin/python3'
Oct 01 16:35:56 compute-0 sudo[99730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:56 compute-0 podman[99554]: 2025-10-01 16:35:56.460416364 +0000 UTC m=+1.396638544 container remove 0321f81ca2cb2eba6f1366c3cf272be099208036b078752be8b9b30d4b4e1e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 01 16:35:56 compute-0 ceph-mon[74273]: pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:56 compute-0 systemd[1]: libpod-conmon-0321f81ca2cb2eba6f1366c3cf272be099208036b078752be8b9b30d4b4e1e3d.scope: Deactivated successfully.
Oct 01 16:35:56 compute-0 sudo[99310]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:35:56 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:35:56 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:56 compute-0 ceph-mgr[74571]: [progress INFO root] update: starting ev 933e45d6-ff78-4de6-9abd-ba57067ef444 (Updating rgw.rgw deployment (+1 -> 1))
Oct 01 16:35:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ktodly", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Oct 01 16:35:56 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ktodly", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 01 16:35:56 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ktodly", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 01 16:35:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Oct 01 16:35:56 compute-0 python3[99732]: ansible-ansible.legacy.async_status Invoked with jid=j828415165638.99578 mode=status _async_dir=/root/.ansible_async
Oct 01 16:35:56 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:35:56 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:56 compute-0 ceph-mgr[74571]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.ktodly on compute-0
Oct 01 16:35:56 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.ktodly on compute-0
Oct 01 16:35:56 compute-0 sudo[99730]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:56 compute-0 sudo[99733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:56 compute-0 sudo[99733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:56 compute-0 sudo[99733]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:56 compute-0 sudo[99781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:56 compute-0 sudo[99781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:56 compute-0 sudo[99781]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:56 compute-0 sudo[99829]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhhmibhngskafqhmdkyjoozqkytxiric ; /usr/bin/python3'
Oct 01 16:35:56 compute-0 sudo[99829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:56 compute-0 sudo[99830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:56 compute-0 sudo[99830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:56 compute-0 sudo[99830]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:56 compute-0 sudo[99857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:35:56 compute-0 sudo[99857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:56 compute-0 python3[99836]: ansible-ansible.legacy.async_status Invoked with jid=j828415165638.99578 mode=cleanup _async_dir=/root/.ansible_async
Oct 01 16:35:56 compute-0 sudo[99829]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:57 compute-0 podman[99923]: 2025-10-01 16:35:57.135677695 +0000 UTC m=+0.051149719 container create d06e2cc863a5a6a278ba0415229c56f3c015781142e05974cb1f90a32a3738fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lichterman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 16:35:57 compute-0 systemd[1]: Started libpod-conmon-d06e2cc863a5a6a278ba0415229c56f3c015781142e05974cb1f90a32a3738fd.scope.
Oct 01 16:35:57 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:57 compute-0 podman[99923]: 2025-10-01 16:35:57.111996239 +0000 UTC m=+0.027468363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:57 compute-0 podman[99923]: 2025-10-01 16:35:57.213346422 +0000 UTC m=+0.128818466 container init d06e2cc863a5a6a278ba0415229c56f3c015781142e05974cb1f90a32a3738fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:57 compute-0 podman[99923]: 2025-10-01 16:35:57.219163721 +0000 UTC m=+0.134635745 container start d06e2cc863a5a6a278ba0415229c56f3c015781142e05974cb1f90a32a3738fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lichterman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 16:35:57 compute-0 podman[99923]: 2025-10-01 16:35:57.222465775 +0000 UTC m=+0.137937799 container attach d06e2cc863a5a6a278ba0415229c56f3c015781142e05974cb1f90a32a3738fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:35:57 compute-0 reverent_lichterman[99940]: 167 167
Oct 01 16:35:57 compute-0 systemd[1]: libpod-d06e2cc863a5a6a278ba0415229c56f3c015781142e05974cb1f90a32a3738fd.scope: Deactivated successfully.
Oct 01 16:35:57 compute-0 conmon[99940]: conmon d06e2cc863a5a6a278ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d06e2cc863a5a6a278ba0415229c56f3c015781142e05974cb1f90a32a3738fd.scope/container/memory.events
Oct 01 16:35:57 compute-0 podman[99923]: 2025-10-01 16:35:57.228499929 +0000 UTC m=+0.143971983 container died d06e2cc863a5a6a278ba0415229c56f3c015781142e05974cb1f90a32a3738fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 16:35:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-744cec36b09d2b61d8997173864a7675e1dbb998bcb050787185a570b63abb92-merged.mount: Deactivated successfully.
Oct 01 16:35:57 compute-0 podman[99923]: 2025-10-01 16:35:57.272233308 +0000 UTC m=+0.187705332 container remove d06e2cc863a5a6a278ba0415229c56f3c015781142e05974cb1f90a32a3738fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct 01 16:35:57 compute-0 sudo[99978]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyzisbgtcvxudwieblbmqjqjxszkywes ; /usr/bin/python3'
Oct 01 16:35:57 compute-0 sudo[99978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:57 compute-0 systemd[1]: libpod-conmon-d06e2cc863a5a6a278ba0415229c56f3c015781142e05974cb1f90a32a3738fd.scope: Deactivated successfully.
Oct 01 16:35:57 compute-0 systemd[1]: Reloading.
Oct 01 16:35:57 compute-0 systemd-sysv-generator[100010]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:35:57 compute-0 systemd-rc-local-generator[100006]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:35:57 compute-0 python3[99980]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:57 compute-0 podman[100018]: 2025-10-01 16:35:57.466774324 +0000 UTC m=+0.034211776 container create 07a42e1670c633961b730c609a8c0b2353868c2a14a100056b726323873d0e38 (image=quay.io/ceph/ceph:v18, name=hardcore_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:57 compute-0 ceph-mon[74273]: from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 16:35:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ktodly", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 01 16:35:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ktodly", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 01 16:35:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:57 compute-0 ceph-mon[74273]: Deploying daemon rgw.rgw.compute-0.ktodly on compute-0
Oct 01 16:35:57 compute-0 podman[100018]: 2025-10-01 16:35:57.453683229 +0000 UTC m=+0.021120701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:57 compute-0 systemd[1]: Started libpod-conmon-07a42e1670c633961b730c609a8c0b2353868c2a14a100056b726323873d0e38.scope.
Oct 01 16:35:57 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be480759141e75dea9d6ab614e447903cf9075c0f0c5ee084723ee8ea2951e92/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be480759141e75dea9d6ab614e447903cf9075c0f0c5ee084723ee8ea2951e92/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:57 compute-0 podman[100018]: 2025-10-01 16:35:57.637230534 +0000 UTC m=+0.204668086 container init 07a42e1670c633961b730c609a8c0b2353868c2a14a100056b726323873d0e38 (image=quay.io/ceph/ceph:v18, name=hardcore_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:57 compute-0 podman[100018]: 2025-10-01 16:35:57.65036531 +0000 UTC m=+0.217802802 container start 07a42e1670c633961b730c609a8c0b2353868c2a14a100056b726323873d0e38 (image=quay.io/ceph/ceph:v18, name=hardcore_germain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 16:35:57 compute-0 systemd[1]: Reloading.
Oct 01 16:35:57 compute-0 podman[100018]: 2025-10-01 16:35:57.654725241 +0000 UTC m=+0.222162793 container attach 07a42e1670c633961b730c609a8c0b2353868c2a14a100056b726323873d0e38 (image=quay.io/ceph/ceph:v18, name=hardcore_germain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 16:35:57 compute-0 systemd-rc-local-generator[100073]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:35:57 compute-0 systemd-sysv-generator[100076]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:35:57 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.ktodly for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5...
Oct 01 16:35:58 compute-0 podman[100146]: 2025-10-01 16:35:58.189033348 +0000 UTC m=+0.059816998 container create 102da573bc9259099f8ded3e41a60c69eb2534ba6efb37a271c310e46dfbdb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-rgw-rgw-compute-0-ktodly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:35:58 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 16:35:58 compute-0 hardcore_germain[100036]: 
Oct 01 16:35:58 compute-0 hardcore_germain[100036]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 01 16:35:58 compute-0 systemd[1]: libpod-07a42e1670c633961b730c609a8c0b2353868c2a14a100056b726323873d0e38.scope: Deactivated successfully.
Oct 01 16:35:58 compute-0 podman[100018]: 2025-10-01 16:35:58.234747107 +0000 UTC m=+0.802184559 container died 07a42e1670c633961b730c609a8c0b2353868c2a14a100056b726323873d0e38 (image=quay.io/ceph/ceph:v18, name=hardcore_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c94d5a82a21ac14b9dc87e0a1b9121be947df3e004c6feaaf532a2ce538294/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c94d5a82a21ac14b9dc87e0a1b9121be947df3e004c6feaaf532a2ce538294/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c94d5a82a21ac14b9dc87e0a1b9121be947df3e004c6feaaf532a2ce538294/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c94d5a82a21ac14b9dc87e0a1b9121be947df3e004c6feaaf532a2ce538294/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.ktodly supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:58 compute-0 podman[100146]: 2025-10-01 16:35:58.153328926 +0000 UTC m=+0.024112606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:58 compute-0 podman[100146]: 2025-10-01 16:35:58.259617681 +0000 UTC m=+0.130401321 container init 102da573bc9259099f8ded3e41a60c69eb2534ba6efb37a271c310e46dfbdb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-rgw-rgw-compute-0-ktodly, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 16:35:58 compute-0 podman[100146]: 2025-10-01 16:35:58.267499156 +0000 UTC m=+0.138282786 container start 102da573bc9259099f8ded3e41a60c69eb2534ba6efb37a271c310e46dfbdb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-rgw-rgw-compute-0-ktodly, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:58 compute-0 bash[100146]: 102da573bc9259099f8ded3e41a60c69eb2534ba6efb37a271c310e46dfbdb3c
Oct 01 16:35:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-be480759141e75dea9d6ab614e447903cf9075c0f0c5ee084723ee8ea2951e92-merged.mount: Deactivated successfully.
Oct 01 16:35:58 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.ktodly for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5.
Oct 01 16:35:58 compute-0 podman[100018]: 2025-10-01 16:35:58.295255622 +0000 UTC m=+0.862693074 container remove 07a42e1670c633961b730c609a8c0b2353868c2a14a100056b726323873d0e38 (image=quay.io/ceph/ceph:v18, name=hardcore_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:35:58 compute-0 sudo[99978]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:58 compute-0 systemd[1]: libpod-conmon-07a42e1670c633961b730c609a8c0b2353868c2a14a100056b726323873d0e38.scope: Deactivated successfully.
Oct 01 16:35:58 compute-0 radosgw[100177]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct 01 16:35:58 compute-0 radosgw[100177]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Oct 01 16:35:58 compute-0 radosgw[100177]: framework: beast
Oct 01 16:35:58 compute-0 radosgw[100177]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct 01 16:35:58 compute-0 radosgw[100177]: init_numa not setting numa affinity
Oct 01 16:35:58 compute-0 sudo[99857]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:35:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:35:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 01 16:35:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:58 compute-0 ceph-mgr[74571]: [progress INFO root] complete: finished ev 933e45d6-ff78-4de6-9abd-ba57067ef444 (Updating rgw.rgw deployment (+1 -> 1))
Oct 01 16:35:58 compute-0 ceph-mgr[74571]: [progress INFO root] Completed event 933e45d6-ff78-4de6-9abd-ba57067ef444 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Oct 01 16:35:58 compute-0 ceph-mgr[74571]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Oct 01 16:35:58 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct 01 16:35:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 01 16:35:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 01 16:35:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:58 compute-0 ceph-mgr[74571]: [progress INFO root] update: starting ev d70fc322-185f-4b99-a7c7-33301931a306 (Updating mds.cephfs deployment (+1 -> 1))
Oct 01 16:35:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dbklxe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Oct 01 16:35:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dbklxe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 01 16:35:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dbklxe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 01 16:35:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:35:58 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:58 compute-0 ceph-mgr[74571]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.dbklxe on compute-0
Oct 01 16:35:58 compute-0 ceph-mgr[74571]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.dbklxe on compute-0
Oct 01 16:35:58 compute-0 sudo[100239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:58 compute-0 sudo[100239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:58 compute-0 sudo[100239]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:58 compute-0 sudo[100264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:35:58 compute-0 sudo[100264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:58 compute-0 sudo[100264]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:58 compute-0 ceph-mon[74273]: pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:35:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dbklxe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 01 16:35:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dbklxe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 01 16:35:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:35:58 compute-0 sudo[100289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:35:58 compute-0 sudo[100289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:58 compute-0 sudo[100289]: pam_unix(sudo:session): session closed for user root
Oct 01 16:35:58 compute-0 sudo[100314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
Oct 01 16:35:58 compute-0 sudo[100314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:35:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:35:59 compute-0 podman[100381]: 2025-10-01 16:35:59.025595639 +0000 UTC m=+0.050240681 container create 4a4a68ce1675c5fd5e4357731b8428300fb0e95c39da682ea2ae236838e49f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct 01 16:35:59 compute-0 systemd[1]: Started libpod-conmon-4a4a68ce1675c5fd5e4357731b8428300fb0e95c39da682ea2ae236838e49f10.scope.
Oct 01 16:35:59 compute-0 podman[100381]: 2025-10-01 16:35:59.001946325 +0000 UTC m=+0.026591377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:35:59 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:59 compute-0 sudo[100423]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hueliewxmfctmrrscthqrhwrtkrfvzfr ; /usr/bin/python3'
Oct 01 16:35:59 compute-0 sudo[100423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:35:59 compute-0 podman[100381]: 2025-10-01 16:35:59.121211761 +0000 UTC m=+0.145856783 container init 4a4a68ce1675c5fd5e4357731b8428300fb0e95c39da682ea2ae236838e49f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:59 compute-0 podman[100381]: 2025-10-01 16:35:59.130975962 +0000 UTC m=+0.155620964 container start 4a4a68ce1675c5fd5e4357731b8428300fb0e95c39da682ea2ae236838e49f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mendeleev, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 16:35:59 compute-0 podman[100381]: 2025-10-01 16:35:59.134248673 +0000 UTC m=+0.158893725 container attach 4a4a68ce1675c5fd5e4357731b8428300fb0e95c39da682ea2ae236838e49f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mendeleev, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 16:35:59 compute-0 relaxed_mendeleev[100404]: 167 167
Oct 01 16:35:59 compute-0 systemd[1]: libpod-4a4a68ce1675c5fd5e4357731b8428300fb0e95c39da682ea2ae236838e49f10.scope: Deactivated successfully.
Oct 01 16:35:59 compute-0 podman[100381]: 2025-10-01 16:35:59.139311748 +0000 UTC m=+0.163956740 container died 4a4a68ce1675c5fd5e4357731b8428300fb0e95c39da682ea2ae236838e49f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-f737a8307f814cb34655a8b8f77cbffda3a84883d5d48457c11d462a54f6e20b-merged.mount: Deactivated successfully.
Oct 01 16:35:59 compute-0 podman[100381]: 2025-10-01 16:35:59.176297121 +0000 UTC m=+0.200942123 container remove 4a4a68ce1675c5fd5e4357731b8428300fb0e95c39da682ea2ae236838e49f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mendeleev, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 16:35:59 compute-0 systemd[1]: libpod-conmon-4a4a68ce1675c5fd5e4357731b8428300fb0e95c39da682ea2ae236838e49f10.scope: Deactivated successfully.
Oct 01 16:35:59 compute-0 systemd[1]: Reloading.
Oct 01 16:35:59 compute-0 python3[100425]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:35:59 compute-0 systemd-rc-local-generator[100468]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:35:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:35:59 compute-0 systemd-sysv-generator[100472]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:35:59 compute-0 podman[100476]: 2025-10-01 16:35:59.339261716 +0000 UTC m=+0.041054615 container create 782dfdef71dcdbf293088b57bde64243aeab5f9fec7a23dfe251a975a1a56b2f (image=quay.io/ceph/ceph:v18, name=cranky_keller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:35:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct 01 16:35:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Oct 01 16:35:59 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Oct 01 16:35:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Oct 01 16:35:59 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/664966323' entity='client.rgw.rgw.compute-0.ktodly' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 01 16:35:59 compute-0 podman[100476]: 2025-10-01 16:35:59.320338649 +0000 UTC m=+0.022131578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:35:59 compute-0 systemd[1]: Started libpod-conmon-782dfdef71dcdbf293088b57bde64243aeab5f9fec7a23dfe251a975a1a56b2f.scope.
Oct 01 16:35:59 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:35:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1bc011fcf6b6d98b21d2dd17a383dcfa38fe3cd80ce37950fddac52d5dd391b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1bc011fcf6b6d98b21d2dd17a383dcfa38fe3cd80ce37950fddac52d5dd391b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:35:59 compute-0 systemd[1]: Reloading.
Oct 01 16:35:59 compute-0 podman[100476]: 2025-10-01 16:35:59.529797982 +0000 UTC m=+0.231590931 container init 782dfdef71dcdbf293088b57bde64243aeab5f9fec7a23dfe251a975a1a56b2f (image=quay.io/ceph/ceph:v18, name=cranky_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:35:59 compute-0 podman[100476]: 2025-10-01 16:35:59.537919133 +0000 UTC m=+0.239712032 container start 782dfdef71dcdbf293088b57bde64243aeab5f9fec7a23dfe251a975a1a56b2f (image=quay.io/ceph/ceph:v18, name=cranky_keller, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:35:59 compute-0 podman[100476]: 2025-10-01 16:35:59.541549732 +0000 UTC m=+0.243342651 container attach 782dfdef71dcdbf293088b57bde64243aeab5f9fec7a23dfe251a975a1a56b2f (image=quay.io/ceph/ceph:v18, name=cranky_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 16:35:59 compute-0 ceph-mon[74273]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 16:35:59 compute-0 ceph-mon[74273]: Saving service rgw.rgw spec with placement compute-0
Oct 01 16:35:59 compute-0 ceph-mon[74273]: Deploying daemon mds.cephfs.compute-0.dbklxe on compute-0
Oct 01 16:35:59 compute-0 ceph-mon[74273]: osdmap e32: 3 total, 3 up, 3 in
Oct 01 16:35:59 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/664966323' entity='client.rgw.rgw.compute-0.ktodly' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 01 16:35:59 compute-0 systemd-sysv-generator[100531]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:35:59 compute-0 systemd-rc-local-generator[100527]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:35:59 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 32 pg[8.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:35:59 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.dbklxe for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5...
Oct 01 16:35:59 compute-0 podman[100604]: 2025-10-01 16:35:59.981226571 +0000 UTC m=+0.036651436 container create 8eb2601c3c53d486610814c9366f7f50eee952a38a7283827cf0015e4a77ba8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mds-cephfs-compute-0-dbklxe, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct 01 16:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/337a46aa024b665547987966955ad3c0398b9774db1777abeb0d1a1d737bf2d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/337a46aa024b665547987966955ad3c0398b9774db1777abeb0d1a1d737bf2d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/337a46aa024b665547987966955ad3c0398b9774db1777abeb0d1a1d737bf2d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/337a46aa024b665547987966955ad3c0398b9774db1777abeb0d1a1d737bf2d8/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.dbklxe supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:00 compute-0 podman[100604]: 2025-10-01 16:36:00.050844481 +0000 UTC m=+0.106269346 container init 8eb2601c3c53d486610814c9366f7f50eee952a38a7283827cf0015e4a77ba8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mds-cephfs-compute-0-dbklxe, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:36:00 compute-0 podman[100604]: 2025-10-01 16:36:00.055384733 +0000 UTC m=+0.110809598 container start 8eb2601c3c53d486610814c9366f7f50eee952a38a7283827cf0015e4a77ba8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mds-cephfs-compute-0-dbklxe, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 16:36:00 compute-0 podman[100604]: 2025-10-01 16:35:59.964038777 +0000 UTC m=+0.019463662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:36:00 compute-0 bash[100604]: 8eb2601c3c53d486610814c9366f7f50eee952a38a7283827cf0015e4a77ba8c
Oct 01 16:36:00 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.dbklxe for f44264e3-e26a-5bd3-9e84-b4ba651d9cf5.
Oct 01 16:36:00 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14265 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 16:36:00 compute-0 cranky_keller[100495]: 
Oct 01 16:36:00 compute-0 cranky_keller[100495]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct 01 16:36:00 compute-0 systemd[1]: libpod-782dfdef71dcdbf293088b57bde64243aeab5f9fec7a23dfe251a975a1a56b2f.scope: Deactivated successfully.
Oct 01 16:36:00 compute-0 podman[100476]: 2025-10-01 16:36:00.10385912 +0000 UTC m=+0.805652019 container died 782dfdef71dcdbf293088b57bde64243aeab5f9fec7a23dfe251a975a1a56b2f (image=quay.io/ceph/ceph:v18, name=cranky_keller, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:36:00 compute-0 ceph-mds[100624]: set uid:gid to 167:167 (ceph:ceph)
Oct 01 16:36:00 compute-0 ceph-mds[100624]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Oct 01 16:36:00 compute-0 ceph-mds[100624]: main not setting numa affinity
Oct 01 16:36:00 compute-0 ceph-mds[100624]: pidfile_write: ignore empty --pid-file
Oct 01 16:36:00 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mds-cephfs-compute-0-dbklxe[100620]: starting mds.cephfs.compute-0.dbklxe at 
Oct 01 16:36:00 compute-0 sudo[100314]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:00 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe Updating MDS map to version 2 from mon.0
Oct 01 16:36:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:36:00 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:36:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1bc011fcf6b6d98b21d2dd17a383dcfa38fe3cd80ce37950fddac52d5dd391b-merged.mount: Deactivated successfully.
Oct 01 16:36:00 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 01 16:36:00 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:00 compute-0 ceph-mgr[74571]: [progress INFO root] complete: finished ev d70fc322-185f-4b99-a7c7-33301931a306 (Updating mds.cephfs deployment (+1 -> 1))
Oct 01 16:36:00 compute-0 ceph-mgr[74571]: [progress INFO root] Completed event d70fc322-185f-4b99-a7c7-33301931a306 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Oct 01 16:36:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Oct 01 16:36:00 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:00 compute-0 podman[100476]: 2025-10-01 16:36:00.16456596 +0000 UTC m=+0.866358859 container remove 782dfdef71dcdbf293088b57bde64243aeab5f9fec7a23dfe251a975a1a56b2f (image=quay.io/ceph/ceph:v18, name=cranky_keller, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 16:36:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 01 16:36:00 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:00 compute-0 systemd[1]: libpod-conmon-782dfdef71dcdbf293088b57bde64243aeab5f9fec7a23dfe251a975a1a56b2f.scope: Deactivated successfully.
Oct 01 16:36:00 compute-0 sudo[100423]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:00 compute-0 ansible-async_wrapper.py[99591]: Done in kid B.
Oct 01 16:36:00 compute-0 sudo[100657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:36:00 compute-0 sudo[100657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:00 compute-0 sudo[100657]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:00 compute-0 sudo[100682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:36:00 compute-0 sudo[100682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:00 compute-0 sudo[100682]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct 01 16:36:00 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/664966323' entity='client.rgw.rgw.compute-0.ktodly' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 01 16:36:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Oct 01 16:36:00 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Oct 01 16:36:00 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 33 pg[8.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:00 compute-0 sudo[100707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:36:00 compute-0 sudo[100707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:00 compute-0 sudo[100707]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:00 compute-0 sudo[100732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:36:00 compute-0 sudo[100732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:00 compute-0 sudo[100732]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:00 compute-0 sudo[100758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:36:00 compute-0 sudo[100758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:00 compute-0 sudo[100758]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e3 new map
Oct 01 16:36:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-01T16:35:46.001882+0000
                                           modified        2025-10-01T16:35:46.001992+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.dbklxe{-1:14267} state up:standby seq 1 addr [v2:192.168.122.100:6814/2063870053,v1:192.168.122.100:6815/2063870053] compat {c=[1],r=[1],i=[7ff]}]
Oct 01 16:36:00 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe Updating MDS map to version 3 from mon.0
Oct 01 16:36:00 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe Monitors have assigned me to become a standby.
Oct 01 16:36:00 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2063870053,v1:192.168.122.100:6815/2063870053] up:boot
Oct 01 16:36:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/2063870053,v1:192.168.122.100:6815/2063870053] as mds.0
Oct 01 16:36:00 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.dbklxe assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 01 16:36:00 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 01 16:36:00 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 01 16:36:00 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 01 16:36:00 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Oct 01 16:36:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.dbklxe"} v 0) v1
Oct 01 16:36:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.dbklxe"}]: dispatch
Oct 01 16:36:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e3 all = 0
Oct 01 16:36:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e4 new map
Oct 01 16:36:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-01T16:35:46.001882+0000
                                           modified        2025-10-01T16:36:00.565459+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14267}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.dbklxe{0:14267} state up:creating seq 1 addr [v2:192.168.122.100:6814/2063870053,v1:192.168.122.100:6815/2063870053] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Oct 01 16:36:00 compute-0 ceph-mon[74273]: pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:00 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/664966323' entity='client.rgw.rgw.compute-0.ktodly' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 01 16:36:00 compute-0 ceph-mon[74273]: osdmap e33: 3 total, 3 up, 3 in
Oct 01 16:36:00 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe Updating MDS map to version 4 from mon.0
Oct 01 16:36:00 compute-0 ceph-mds[100624]: mds.0.4 handle_mds_map i am now mds.0.4
Oct 01 16:36:00 compute-0 ceph-mds[100624]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Oct 01 16:36:00 compute-0 ceph-mds[100624]: mds.0.cache creating system inode with ino:0x1
Oct 01 16:36:00 compute-0 ceph-mds[100624]: mds.0.cache creating system inode with ino:0x100
Oct 01 16:36:00 compute-0 ceph-mds[100624]: mds.0.cache creating system inode with ino:0x600
Oct 01 16:36:00 compute-0 ceph-mds[100624]: mds.0.cache creating system inode with ino:0x601
Oct 01 16:36:00 compute-0 ceph-mds[100624]: mds.0.cache creating system inode with ino:0x602
Oct 01 16:36:00 compute-0 ceph-mds[100624]: mds.0.cache creating system inode with ino:0x603
Oct 01 16:36:00 compute-0 ceph-mds[100624]: mds.0.cache creating system inode with ino:0x604
Oct 01 16:36:00 compute-0 ceph-mds[100624]: mds.0.cache creating system inode with ino:0x605
Oct 01 16:36:00 compute-0 ceph-mds[100624]: mds.0.cache creating system inode with ino:0x606
Oct 01 16:36:00 compute-0 ceph-mds[100624]: mds.0.cache creating system inode with ino:0x607
Oct 01 16:36:00 compute-0 ceph-mds[100624]: mds.0.cache creating system inode with ino:0x608
Oct 01 16:36:00 compute-0 ceph-mds[100624]: mds.0.cache creating system inode with ino:0x609
Oct 01 16:36:00 compute-0 sudo[100784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 16:36:00 compute-0 sudo[100784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:00 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.dbklxe=up:creating}
Oct 01 16:36:00 compute-0 ceph-mds[100624]: mds.0.4 creating_done
Oct 01 16:36:00 compute-0 ceph-mon[74273]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.dbklxe is now active in filesystem cephfs as rank 0
Oct 01 16:36:00 compute-0 sudo[100908]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbyzvwhfirjfrwqpwjeihodiiqgihlpi ; /usr/bin/python3'
Oct 01 16:36:00 compute-0 sudo[100908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:36:01 compute-0 podman[100915]: 2025-10-01 16:36:01.031956911 +0000 UTC m=+0.068857861 container exec bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 16:36:01 compute-0 python3[100914]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:36:01 compute-0 podman[100915]: 2025-10-01 16:36:01.10844978 +0000 UTC m=+0.145350700 container exec_died bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:36:01 compute-0 podman[100935]: 2025-10-01 16:36:01.143132857 +0000 UTC m=+0.058335382 container create 0b89e2eef0e9083510c94187d52ff912b7e85f72c424842332ea5f3b02273bef (image=quay.io/ceph/ceph:v18, name=affectionate_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:36:01 compute-0 systemd[1]: Started libpod-conmon-0b89e2eef0e9083510c94187d52ff912b7e85f72c424842332ea5f3b02273bef.scope.
Oct 01 16:36:01 compute-0 podman[100935]: 2025-10-01 16:36:01.104432081 +0000 UTC m=+0.019634616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:36:01 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b981d1e2d91240627bc57562943976868e66100d76d07439b05574fc9e3516/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b981d1e2d91240627bc57562943976868e66100d76d07439b05574fc9e3516/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:01 compute-0 podman[100935]: 2025-10-01 16:36:01.230478874 +0000 UTC m=+0.145681409 container init 0b89e2eef0e9083510c94187d52ff912b7e85f72c424842332ea5f3b02273bef (image=quay.io/ceph/ceph:v18, name=affectionate_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 16:36:01 compute-0 podman[100935]: 2025-10-01 16:36:01.235884908 +0000 UTC m=+0.151087453 container start 0b89e2eef0e9083510c94187d52ff912b7e85f72c424842332ea5f3b02273bef (image=quay.io/ceph/ceph:v18, name=affectionate_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 16:36:01 compute-0 podman[100935]: 2025-10-01 16:36:01.240102292 +0000 UTC m=+0.155304827 container attach 0b89e2eef0e9083510c94187d52ff912b7e85f72c424842332ea5f3b02273bef (image=quay.io/ceph/ceph:v18, name=affectionate_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 16:36:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v79: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct 01 16:36:01 compute-0 ceph-mgr[74571]: [progress INFO root] Writing back 5 completed events
Oct 01 16:36:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 01 16:36:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Oct 01 16:36:01 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Oct 01 16:36:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct 01 16:36:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/664966323' entity='client.rgw.rgw.compute-0.ktodly' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 01 16:36:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:01 compute-0 ceph-mon[74273]: from='client.14265 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 16:36:01 compute-0 ceph-mon[74273]: mds.? [v2:192.168.122.100:6814/2063870053,v1:192.168.122.100:6815/2063870053] up:boot
Oct 01 16:36:01 compute-0 ceph-mon[74273]: daemon mds.cephfs.compute-0.dbklxe assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 01 16:36:01 compute-0 ceph-mon[74273]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 01 16:36:01 compute-0 ceph-mon[74273]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 01 16:36:01 compute-0 ceph-mon[74273]: Cluster is now healthy
Oct 01 16:36:01 compute-0 ceph-mon[74273]: fsmap cephfs:0 1 up:standby
Oct 01 16:36:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.dbklxe"}]: dispatch
Oct 01 16:36:01 compute-0 ceph-mon[74273]: fsmap cephfs:1 {0=cephfs.compute-0.dbklxe=up:creating}
Oct 01 16:36:01 compute-0 ceph-mon[74273]: daemon mds.cephfs.compute-0.dbklxe is now active in filesystem cephfs as rank 0
Oct 01 16:36:01 compute-0 ceph-mon[74273]: osdmap e34: 3 total, 3 up, 3 in
Oct 01 16:36:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/664966323' entity='client.rgw.rgw.compute-0.ktodly' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 01 16:36:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e5 new map
Oct 01 16:36:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-01T16:35:46.001882+0000
                                           modified        2025-10-01T16:36:01.572568+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14267}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.dbklxe{0:14267} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2063870053,v1:192.168.122.100:6815/2063870053] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Oct 01 16:36:01 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe Updating MDS map to version 5 from mon.0
Oct 01 16:36:01 compute-0 ceph-mds[100624]: mds.0.4 handle_mds_map i am now mds.0.4
Oct 01 16:36:01 compute-0 ceph-mds[100624]: mds.0.4 handle_mds_map state change up:creating --> up:active
Oct 01 16:36:01 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2063870053,v1:192.168.122.100:6815/2063870053] up:active
Oct 01 16:36:01 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.dbklxe=up:active}
Oct 01 16:36:01 compute-0 ceph-mds[100624]: mds.0.4 recovery_done -- successful recovery!
Oct 01 16:36:01 compute-0 ceph-mds[100624]: mds.0.4 active_start
Oct 01 16:36:01 compute-0 sudo[100784]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:36:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:36:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:01 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 16:36:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:36:01 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:36:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:36:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:36:01 compute-0 affectionate_mclean[100963]: 
Oct 01 16:36:01 compute-0 affectionate_mclean[100963]: [{"container_id": "8c459a1c54f7", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.49%", "created": "2025-10-01T16:34:38.107012Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-10-01T16:34:38.160782Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T16:36:01.700089Z", "memory_usage": 11628707, "ports": [], "service_name": "crash", "started": "2025-10-01T16:34:38.007554Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5@crash.compute-0", "version": "18.2.7"}, {"container_id": "8eb2601c3c53", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "8.32%", "created": "2025-10-01T16:36:00.079408Z", "daemon_id": "cephfs.compute-0.dbklxe", "daemon_name": "mds.cephfs.compute-0.dbklxe", "daemon_type": "mds", "events": ["2025-10-01T16:36:00.150415Z daemon:mds.cephfs.compute-0.dbklxe [INFO] \"Deployed mds.cephfs.compute-0.dbklxe on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T16:36:01.700752Z", 
"memory_usage": 14963179, "ports": [], "service_name": "mds.cephfs", "started": "2025-10-01T16:35:59.968491Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5@mds.cephfs.compute-0.dbklxe", "version": "18.2.7"}, {"container_id": "9642ab418b13", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "30.36%", "created": "2025-10-01T16:33:27.159033Z", "daemon_id": "compute-0.pmbdpj", "daemon_name": "mgr.compute-0.pmbdpj", "daemon_type": "mgr", "events": ["2025-10-01T16:34:43.185882Z daemon:mgr.compute-0.pmbdpj [INFO] \"Reconfigured mgr.compute-0.pmbdpj on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T16:36:01.699960Z", "memory_usage": 548510105, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-01T16:33:27.071292Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5@mgr.compute-0.pmbdpj", "version": "18.2.7"}, {"container_id": "bfdaa9b78cc1", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.08%", "created": "2025-10-01T16:33:19.862785Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-10-01T16:34:42.530378Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": 
false, "last_refresh": "2025-10-01T16:36:01.699776Z", "memory_request": 2147483648, "memory_usage": 39699087, "ports": [], "service_name": "mon", "started": "2025-10-01T16:33:23.039337Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5@mon.compute-0", "version": "18.2.7"}, {"container_id": "072e3be7e651", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.51%", "created": "2025-10-01T16:35:05.740151Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-10-01T16:35:05.797291Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T16:36:01.700241Z", "memory_request": 4294967296, "memory_usage": 59978547, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-01T16:35:05.619691Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5@osd.0", "version": "18.2.7"}, {"container_id": "e9f714ab807d", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.59%", "created": "2025-10-01T16:35:09.953711Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": 
["2025-10-01T16:35:10.032823Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T16:36:01.700360Z", "memory_request": 4294967296, "memory_usage": 57503907, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-01T16:35:09.820957Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5@osd.1", "version": "18.2.7"}, {"container_id": "412bad0677b0", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.70%", "created": "2025-10-01T16:35:14.785205Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-10-01T16:35:14.884859Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T16:36:01.700503Z", "memory_request": 4294967296, "memory_usage": 55354327, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-01T16:35:14.632851Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5@osd.2", "version": "18.2.7"}, {"container_id": "102da573bc92", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", 
"cpu_percentage": "3.10%", "created": "2025-10-01T16:35:58.287364Z", "daemon_id": "rgw.compute-0.ktodly", "daemon_name": "rgw.rgw.compute-0.ktodly", "daemon_type": "rgw", "events": ["2025-10-01T16:35:58.368666Z daemon:rgw.rgw.compute-0.ktodly [INFO] \"Deployed rgw.rgw.compute-0.ktodly on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2025-10-01T16:36:01.700629Z", "memory_usage": 20436746, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-10-01T16:35:58.161531Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5@rgw.rgw.compute-0.ktodly", "version": "18.2.7"}]
Oct 01 16:36:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:36:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:01 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 7a681e13-036b-4bd6-b91a-d6ec4904ab27 does not exist
Oct 01 16:36:01 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 7f49eb34-0344-4132-9972-ba6eebf4d358 does not exist
Oct 01 16:36:01 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev a83ded4b-03a5-4b36-8532-604f622f82be does not exist
Oct 01 16:36:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:36:01 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:36:01 compute-0 systemd[1]: libpod-0b89e2eef0e9083510c94187d52ff912b7e85f72c424842332ea5f3b02273bef.scope: Deactivated successfully.
Oct 01 16:36:01 compute-0 podman[100935]: 2025-10-01 16:36:01.780467258 +0000 UTC m=+0.695669803 container died 0b89e2eef0e9083510c94187d52ff912b7e85f72c424842332ea5f3b02273bef (image=quay.io/ceph/ceph:v18, name=affectionate_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:36:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:36:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:36:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:36:01 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:36:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4b981d1e2d91240627bc57562943976868e66100d76d07439b05574fc9e3516-merged.mount: Deactivated successfully.
Oct 01 16:36:01 compute-0 rsyslogd[1001]: message too long (8588) with configured size 8096, begin of message is: [{"container_id": "8c459a1c54f7", "container_image_digests": ["quay.io/ceph/ceph [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 01 16:36:01 compute-0 sudo[101110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:36:01 compute-0 sudo[101110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:01 compute-0 sudo[101110]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:01 compute-0 sudo[101147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:36:01 compute-0 sudo[101147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:01 compute-0 sudo[101147]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:01 compute-0 podman[100935]: 2025-10-01 16:36:01.932339819 +0000 UTC m=+0.847542334 container remove 0b89e2eef0e9083510c94187d52ff912b7e85f72c424842332ea5f3b02273bef (image=quay.io/ceph/ceph:v18, name=affectionate_mclean, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:36:01 compute-0 systemd[1]: libpod-conmon-0b89e2eef0e9083510c94187d52ff912b7e85f72c424842332ea5f3b02273bef.scope: Deactivated successfully.
Oct 01 16:36:01 compute-0 sudo[100908]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:01 compute-0 sudo[101172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:36:01 compute-0 sudo[101172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:02 compute-0 sudo[101172]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:02 compute-0 sudo[101197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:36:02 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 34 pg[9.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:02 compute-0 sudo[101197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct 01 16:36:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/664966323' entity='client.rgw.rgw.compute-0.ktodly' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 01 16:36:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Oct 01 16:36:02 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Oct 01 16:36:02 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 35 pg[9.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:02 compute-0 podman[101262]: 2025-10-01 16:36:02.389880149 +0000 UTC m=+0.036044881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:36:02 compute-0 podman[101262]: 2025-10-01 16:36:02.491832057 +0000 UTC m=+0.137996739 container create 52264ed7574035dd89068e76896bc61e1c0ea1ecb146b2f6b709ddf51f0ed54a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jennings, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:36:02 compute-0 systemd[1]: Started libpod-conmon-52264ed7574035dd89068e76896bc61e1c0ea1ecb146b2f6b709ddf51f0ed54a.scope.
Oct 01 16:36:02 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:36:02 compute-0 ceph-mon[74273]: pgmap v79: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:02 compute-0 ceph-mon[74273]: mds.? [v2:192.168.122.100:6814/2063870053,v1:192.168.122.100:6815/2063870053] up:active
Oct 01 16:36:02 compute-0 ceph-mon[74273]: fsmap cephfs:1 {0=cephfs.compute-0.dbklxe=up:active}
Oct 01 16:36:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:02 compute-0 ceph-mon[74273]: from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 16:36:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:36:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:36:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:36:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:36:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:36:02 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/664966323' entity='client.rgw.rgw.compute-0.ktodly' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 01 16:36:02 compute-0 ceph-mon[74273]: osdmap e35: 3 total, 3 up, 3 in
Oct 01 16:36:02 compute-0 podman[101262]: 2025-10-01 16:36:02.629737773 +0000 UTC m=+0.275902455 container init 52264ed7574035dd89068e76896bc61e1c0ea1ecb146b2f6b709ddf51f0ed54a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 16:36:02 compute-0 podman[101262]: 2025-10-01 16:36:02.638870768 +0000 UTC m=+0.285035410 container start 52264ed7574035dd89068e76896bc61e1c0ea1ecb146b2f6b709ddf51f0ed54a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jennings, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 16:36:02 compute-0 competent_jennings[101281]: 167 167
Oct 01 16:36:02 compute-0 systemd[1]: libpod-52264ed7574035dd89068e76896bc61e1c0ea1ecb146b2f6b709ddf51f0ed54a.scope: Deactivated successfully.
Oct 01 16:36:02 compute-0 podman[101262]: 2025-10-01 16:36:02.670330225 +0000 UTC m=+0.316494887 container attach 52264ed7574035dd89068e76896bc61e1c0ea1ecb146b2f6b709ddf51f0ed54a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jennings, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 16:36:02 compute-0 podman[101262]: 2025-10-01 16:36:02.670694124 +0000 UTC m=+0.316858766 container died 52264ed7574035dd89068e76896bc61e1c0ea1ecb146b2f6b709ddf51f0ed54a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 16:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcaedb52e37aa9f543b0a11f0aad7d72f140b17e4600ec1390cbac1770fe8471-merged.mount: Deactivated successfully.
Oct 01 16:36:02 compute-0 sudo[101321]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrnynxxsknwaqhzfbvljzvyueeyglryl ; /usr/bin/python3'
Oct 01 16:36:02 compute-0 sudo[101321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:36:02 compute-0 podman[101262]: 2025-10-01 16:36:02.81099343 +0000 UTC m=+0.457158112 container remove 52264ed7574035dd89068e76896bc61e1c0ea1ecb146b2f6b709ddf51f0ed54a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:36:02 compute-0 systemd[1]: libpod-conmon-52264ed7574035dd89068e76896bc61e1c0ea1ecb146b2f6b709ddf51f0ed54a.scope: Deactivated successfully.
Oct 01 16:36:02 compute-0 python3[101323]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:36:02 compute-0 podman[101331]: 2025-10-01 16:36:02.985285344 +0000 UTC m=+0.067155709 container create 1d51787a5c1e08ef9bb67a841b7783682f5872c17b522161bd0161886ce11a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:36:03 compute-0 podman[101346]: 2025-10-01 16:36:03.026074242 +0000 UTC m=+0.064543786 container create 9c38d8a1624b3f2007dac1b7be473f8f918dcdc8004f0ba7a616bb37dd47454b (image=quay.io/ceph/ceph:v18, name=distracted_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:36:03 compute-0 podman[101331]: 2025-10-01 16:36:02.939704778 +0000 UTC m=+0.021575133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:36:03 compute-0 systemd[1]: Started libpod-conmon-1d51787a5c1e08ef9bb67a841b7783682f5872c17b522161bd0161886ce11a7c.scope.
Oct 01 16:36:03 compute-0 systemd[1]: Started libpod-conmon-9c38d8a1624b3f2007dac1b7be473f8f918dcdc8004f0ba7a616bb37dd47454b.scope.
Oct 01 16:36:03 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1968b315873382cb4a95c51734a914de3e7c1933fd7b604a562230ac338ed76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1968b315873382cb4a95c51734a914de3e7c1933fd7b604a562230ac338ed76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1968b315873382cb4a95c51734a914de3e7c1933fd7b604a562230ac338ed76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1968b315873382cb4a95c51734a914de3e7c1933fd7b604a562230ac338ed76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1968b315873382cb4a95c51734a914de3e7c1933fd7b604a562230ac338ed76/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:03 compute-0 podman[101331]: 2025-10-01 16:36:03.07215425 +0000 UTC m=+0.154024595 container init 1d51787a5c1e08ef9bb67a841b7783682f5872c17b522161bd0161886ce11a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:36:03 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3602ab2e31b6e53b05c6089e8ce5c852675ca8acfaec2ef88bfa09fad904b29/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3602ab2e31b6e53b05c6089e8ce5c852675ca8acfaec2ef88bfa09fad904b29/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:03 compute-0 podman[101331]: 2025-10-01 16:36:03.084542596 +0000 UTC m=+0.166412921 container start 1d51787a5c1e08ef9bb67a841b7783682f5872c17b522161bd0161886ce11a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 16:36:03 compute-0 podman[101331]: 2025-10-01 16:36:03.088410591 +0000 UTC m=+0.170280936 container attach 1d51787a5c1e08ef9bb67a841b7783682f5872c17b522161bd0161886ce11a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:36:03 compute-0 podman[101346]: 2025-10-01 16:36:02.996586393 +0000 UTC m=+0.035055957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:36:03 compute-0 podman[101346]: 2025-10-01 16:36:03.104534849 +0000 UTC m=+0.143004403 container init 9c38d8a1624b3f2007dac1b7be473f8f918dcdc8004f0ba7a616bb37dd47454b (image=quay.io/ceph/ceph:v18, name=distracted_shirley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:36:03 compute-0 podman[101346]: 2025-10-01 16:36:03.109233145 +0000 UTC m=+0.147702699 container start 9c38d8a1624b3f2007dac1b7be473f8f918dcdc8004f0ba7a616bb37dd47454b (image=quay.io/ceph/ceph:v18, name=distracted_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Oct 01 16:36:03 compute-0 podman[101346]: 2025-10-01 16:36:03.112746502 +0000 UTC m=+0.151216066 container attach 9c38d8a1624b3f2007dac1b7be473f8f918dcdc8004f0ba7a616bb37dd47454b (image=quay.io/ceph/ceph:v18, name=distracted_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:36:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v82: 9 pgs: 9 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Oct 01 16:36:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct 01 16:36:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct 01 16:36:03 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct 01 16:36:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct 01 16:36:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/664966323' entity='client.rgw.rgw.compute-0.ktodly' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 01 16:36:03 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 36 pg[10.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [2] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:36:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 01 16:36:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1031312971' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 01 16:36:03 compute-0 distracted_shirley[101365]: 
Oct 01 16:36:03 compute-0 distracted_shirley[101365]: {"fsid":"f44264e3-e26a-5bd3-9e84-b4ba651d9cf5","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":159,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":36,"num_osds":3,"num_up_osds":3,"osd_up_since":1759336522,"num_in_osds":3,"osd_in_since":1759336495,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":9}],"num_pgs":9,"num_pools":9,"num_objects":6,"data_bytes":460666,"bytes_used":83947520,"bytes_avail":64327979008,"bytes_total":64411926528,"read_bytes_sec":1023,"write_bytes_sec":1023,"read_op_per_sec":0,"write_op_per_sec":0},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.dbklxe","status":"up:active","gid":14267}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-01T16:35:13.275828+0000","services":{}},"progress_events":{}}
Oct 01 16:36:03 compute-0 systemd[1]: libpod-9c38d8a1624b3f2007dac1b7be473f8f918dcdc8004f0ba7a616bb37dd47454b.scope: Deactivated successfully.
Oct 01 16:36:03 compute-0 podman[101396]: 2025-10-01 16:36:03.74712886 +0000 UTC m=+0.029750336 container died 9c38d8a1624b3f2007dac1b7be473f8f918dcdc8004f0ba7a616bb37dd47454b (image=quay.io/ceph/ceph:v18, name=distracted_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 16:36:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3602ab2e31b6e53b05c6089e8ce5c852675ca8acfaec2ef88bfa09fad904b29-merged.mount: Deactivated successfully.
Oct 01 16:36:03 compute-0 podman[101396]: 2025-10-01 16:36:03.79570084 +0000 UTC m=+0.078322306 container remove 9c38d8a1624b3f2007dac1b7be473f8f918dcdc8004f0ba7a616bb37dd47454b (image=quay.io/ceph/ceph:v18, name=distracted_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:36:03 compute-0 systemd[1]: libpod-conmon-9c38d8a1624b3f2007dac1b7be473f8f918dcdc8004f0ba7a616bb37dd47454b.scope: Deactivated successfully.
Oct 01 16:36:03 compute-0 sudo[101321]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:04 compute-0 charming_kalam[101363]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:36:04 compute-0 charming_kalam[101363]: --> relative data size: 1.0
Oct 01 16:36:04 compute-0 charming_kalam[101363]: --> All data devices are unavailable
Oct 01 16:36:04 compute-0 systemd[1]: libpod-1d51787a5c1e08ef9bb67a841b7783682f5872c17b522161bd0161886ce11a7c.scope: Deactivated successfully.
Oct 01 16:36:04 compute-0 podman[101331]: 2025-10-01 16:36:04.068708761 +0000 UTC m=+1.150579076 container died 1d51787a5c1e08ef9bb67a841b7783682f5872c17b522161bd0161886ce11a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kalam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:36:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1968b315873382cb4a95c51734a914de3e7c1933fd7b604a562230ac338ed76-merged.mount: Deactivated successfully.
Oct 01 16:36:04 compute-0 podman[101331]: 2025-10-01 16:36:04.133594684 +0000 UTC m=+1.215465019 container remove 1d51787a5c1e08ef9bb67a841b7783682f5872c17b522161bd0161886ce11a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 16:36:04 compute-0 systemd[1]: libpod-conmon-1d51787a5c1e08ef9bb67a841b7783682f5872c17b522161bd0161886ce11a7c.scope: Deactivated successfully.
Oct 01 16:36:04 compute-0 sudo[101197]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:04 compute-0 sudo[101443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:36:04 compute-0 sudo[101443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:04 compute-0 sudo[101443]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:04 compute-0 sudo[101468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:36:04 compute-0 sudo[101468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:04 compute-0 sudo[101468]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:04 compute-0 sudo[101493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:36:04 compute-0 sudo[101493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:04 compute-0 sudo[101493]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:04 compute-0 sudo[101518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:36:04 compute-0 sudo[101518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct 01 16:36:04 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/664966323' entity='client.rgw.rgw.compute-0.ktodly' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 01 16:36:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct 01 16:36:04 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct 01 16:36:04 compute-0 ceph-mon[74273]: pgmap v82: 9 pgs: 9 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Oct 01 16:36:04 compute-0 ceph-mon[74273]: osdmap e36: 3 total, 3 up, 3 in
Oct 01 16:36:04 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/664966323' entity='client.rgw.rgw.compute-0.ktodly' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 01 16:36:04 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1031312971' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 01 16:36:04 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 37 pg[10.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [2] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:04 compute-0 sudo[101618]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdqinttlpjjkxjamaokiuzrppuozbmhf ; /usr/bin/python3'
Oct 01 16:36:04 compute-0 sudo[101618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:36:04 compute-0 podman[101620]: 2025-10-01 16:36:04.68934495 +0000 UTC m=+0.036968534 container create 1fc3aadf3264e94e3001cc2c512feae47053b885b6e9a55a32026673ac00e914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_villani, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 16:36:04 compute-0 systemd[1]: Started libpod-conmon-1fc3aadf3264e94e3001cc2c512feae47053b885b6e9a55a32026673ac00e914.scope.
Oct 01 16:36:04 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:36:04 compute-0 podman[101620]: 2025-10-01 16:36:04.762180969 +0000 UTC m=+0.109804573 container init 1fc3aadf3264e94e3001cc2c512feae47053b885b6e9a55a32026673ac00e914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_villani, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 16:36:04 compute-0 podman[101620]: 2025-10-01 16:36:04.670352891 +0000 UTC m=+0.017976495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:36:04 compute-0 podman[101620]: 2025-10-01 16:36:04.772297399 +0000 UTC m=+0.119920983 container start 1fc3aadf3264e94e3001cc2c512feae47053b885b6e9a55a32026673ac00e914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_villani, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:36:04 compute-0 podman[101620]: 2025-10-01 16:36:04.775306863 +0000 UTC m=+0.122930477 container attach 1fc3aadf3264e94e3001cc2c512feae47053b885b6e9a55a32026673ac00e914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 16:36:04 compute-0 loving_villani[101637]: 167 167
Oct 01 16:36:04 compute-0 systemd[1]: libpod-1fc3aadf3264e94e3001cc2c512feae47053b885b6e9a55a32026673ac00e914.scope: Deactivated successfully.
Oct 01 16:36:04 compute-0 podman[101620]: 2025-10-01 16:36:04.777719963 +0000 UTC m=+0.125343567 container died 1fc3aadf3264e94e3001cc2c512feae47053b885b6e9a55a32026673ac00e914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 16:36:04 compute-0 python3[101621]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:36:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-334d6cb86eb5da28f2da2afcef640872e00a7a63d4a2c91aa8d3eabb3c7dcb4b-merged.mount: Deactivated successfully.
Oct 01 16:36:04 compute-0 podman[101620]: 2025-10-01 16:36:04.820604842 +0000 UTC m=+0.168228436 container remove 1fc3aadf3264e94e3001cc2c512feae47053b885b6e9a55a32026673ac00e914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_villani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:36:04 compute-0 systemd[1]: libpod-conmon-1fc3aadf3264e94e3001cc2c512feae47053b885b6e9a55a32026673ac00e914.scope: Deactivated successfully.
Oct 01 16:36:04 compute-0 podman[101644]: 2025-10-01 16:36:04.848757297 +0000 UTC m=+0.042799438 container create 95f920e15cceb55feb51397fa8a363bb845a037570e7332b356abf5a6fd4f512 (image=quay.io/ceph/ceph:v18, name=pensive_ritchie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:36:04 compute-0 systemd[1]: Started libpod-conmon-95f920e15cceb55feb51397fa8a363bb845a037570e7332b356abf5a6fd4f512.scope.
Oct 01 16:36:04 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00d27f3482a4db78a3b50233058fd3a3a3b480d46fa3f7c578cc64225a228c8a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00d27f3482a4db78a3b50233058fd3a3a3b480d46fa3f7c578cc64225a228c8a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:04 compute-0 podman[101644]: 2025-10-01 16:36:04.903428717 +0000 UTC m=+0.097470878 container init 95f920e15cceb55feb51397fa8a363bb845a037570e7332b356abf5a6fd4f512 (image=quay.io/ceph/ceph:v18, name=pensive_ritchie, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:36:04 compute-0 podman[101644]: 2025-10-01 16:36:04.910963873 +0000 UTC m=+0.105006014 container start 95f920e15cceb55feb51397fa8a363bb845a037570e7332b356abf5a6fd4f512 (image=quay.io/ceph/ceph:v18, name=pensive_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:36:04 compute-0 podman[101644]: 2025-10-01 16:36:04.915774682 +0000 UTC m=+0.109816843 container attach 95f920e15cceb55feb51397fa8a363bb845a037570e7332b356abf5a6fd4f512 (image=quay.io/ceph/ceph:v18, name=pensive_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:36:04 compute-0 podman[101644]: 2025-10-01 16:36:04.832077355 +0000 UTC m=+0.026119516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:36:04 compute-0 podman[101681]: 2025-10-01 16:36:04.972033372 +0000 UTC m=+0.038221765 container create ad2dd3b007b056479d732c2c2f04d5fae6717506c4184ef80681f4d5e8171ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bose, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 16:36:05 compute-0 systemd[1]: Started libpod-conmon-ad2dd3b007b056479d732c2c2f04d5fae6717506c4184ef80681f4d5e8171ab6.scope.
Oct 01 16:36:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee34ca1692d60c19ce2071e90c23ca71cfc9641b5d3fcad85620b48d87e230a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee34ca1692d60c19ce2071e90c23ca71cfc9641b5d3fcad85620b48d87e230a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee34ca1692d60c19ce2071e90c23ca71cfc9641b5d3fcad85620b48d87e230a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee34ca1692d60c19ce2071e90c23ca71cfc9641b5d3fcad85620b48d87e230a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:05 compute-0 podman[101681]: 2025-10-01 16:36:04.954076748 +0000 UTC m=+0.020265121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:36:05 compute-0 podman[101681]: 2025-10-01 16:36:05.060852875 +0000 UTC m=+0.127041238 container init ad2dd3b007b056479d732c2c2f04d5fae6717506c4184ef80681f4d5e8171ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bose, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 16:36:05 compute-0 podman[101681]: 2025-10-01 16:36:05.066190007 +0000 UTC m=+0.132378360 container start ad2dd3b007b056479d732c2c2f04d5fae6717506c4184ef80681f4d5e8171ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:36:05 compute-0 podman[101681]: 2025-10-01 16:36:05.069178261 +0000 UTC m=+0.135366634 container attach ad2dd3b007b056479d732c2c2f04d5fae6717506c4184ef80681f4d5e8171ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 16:36:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v85: 10 pgs: 1 unknown, 9 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Oct 01 16:36:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct 01 16:36:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct 01 16:36:05 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct 01 16:36:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct 01 16:36:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/779073899' entity='client.rgw.rgw.compute-0.ktodly' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 01 16:36:05 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 38 pg[11.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:05 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/664966323' entity='client.rgw.rgw.compute-0.ktodly' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 01 16:36:05 compute-0 ceph-mon[74273]: osdmap e37: 3 total, 3 up, 3 in
Oct 01 16:36:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 01 16:36:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2613896551' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 16:36:05 compute-0 pensive_ritchie[101672]: 
Oct 01 16:36:05 compute-0 systemd[1]: libpod-95f920e15cceb55feb51397fa8a363bb845a037570e7332b356abf5a6fd4f512.scope: Deactivated successfully.
Oct 01 16:36:05 compute-0 pensive_ritchie[101672]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.ktodly","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Oct 01 16:36:05 compute-0 podman[101644]: 2025-10-01 16:36:05.486289043 +0000 UTC m=+0.680331194 container died 95f920e15cceb55feb51397fa8a363bb845a037570e7332b356abf5a6fd4f512 (image=quay.io/ceph/ceph:v18, name=pensive_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 16:36:05 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 16:36:05 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 16:36:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-00d27f3482a4db78a3b50233058fd3a3a3b480d46fa3f7c578cc64225a228c8a-merged.mount: Deactivated successfully.
Oct 01 16:36:05 compute-0 podman[101644]: 2025-10-01 16:36:05.546443648 +0000 UTC m=+0.740485799 container remove 95f920e15cceb55feb51397fa8a363bb845a037570e7332b356abf5a6fd4f512 (image=quay.io/ceph/ceph:v18, name=pensive_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 16:36:05 compute-0 systemd[1]: libpod-conmon-95f920e15cceb55feb51397fa8a363bb845a037570e7332b356abf5a6fd4f512.scope: Deactivated successfully.
Oct 01 16:36:05 compute-0 sudo[101618]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:05 compute-0 ceph-mds[100624]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Oct 01 16:36:05 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mds-cephfs-compute-0-dbklxe[100620]: 2025-10-01T16:36:05.572+0000 7f2806962640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Oct 01 16:36:05 compute-0 musing_bose[101698]: {
Oct 01 16:36:05 compute-0 musing_bose[101698]:     "0": [
Oct 01 16:36:05 compute-0 musing_bose[101698]:         {
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "devices": [
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "/dev/loop3"
Oct 01 16:36:05 compute-0 musing_bose[101698]:             ],
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "lv_name": "ceph_lv0",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "lv_size": "21470642176",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "name": "ceph_lv0",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "tags": {
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.cluster_name": "ceph",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.crush_device_class": "",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.encrypted": "0",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.osd_id": "0",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.type": "block",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.vdo": "0"
Oct 01 16:36:05 compute-0 musing_bose[101698]:             },
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "type": "block",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "vg_name": "ceph_vg0"
Oct 01 16:36:05 compute-0 musing_bose[101698]:         }
Oct 01 16:36:05 compute-0 musing_bose[101698]:     ],
Oct 01 16:36:05 compute-0 musing_bose[101698]:     "1": [
Oct 01 16:36:05 compute-0 musing_bose[101698]:         {
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "devices": [
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "/dev/loop4"
Oct 01 16:36:05 compute-0 musing_bose[101698]:             ],
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "lv_name": "ceph_lv1",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "lv_size": "21470642176",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "name": "ceph_lv1",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "tags": {
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.cluster_name": "ceph",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.crush_device_class": "",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.encrypted": "0",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.osd_id": "1",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.type": "block",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.vdo": "0"
Oct 01 16:36:05 compute-0 musing_bose[101698]:             },
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "type": "block",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "vg_name": "ceph_vg1"
Oct 01 16:36:05 compute-0 musing_bose[101698]:         }
Oct 01 16:36:05 compute-0 musing_bose[101698]:     ],
Oct 01 16:36:05 compute-0 musing_bose[101698]:     "2": [
Oct 01 16:36:05 compute-0 musing_bose[101698]:         {
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "devices": [
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "/dev/loop5"
Oct 01 16:36:05 compute-0 musing_bose[101698]:             ],
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "lv_name": "ceph_lv2",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "lv_size": "21470642176",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "name": "ceph_lv2",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "tags": {
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.cluster_name": "ceph",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.crush_device_class": "",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.encrypted": "0",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.osd_id": "2",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.type": "block",
Oct 01 16:36:05 compute-0 musing_bose[101698]:                 "ceph.vdo": "0"
Oct 01 16:36:05 compute-0 musing_bose[101698]:             },
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "type": "block",
Oct 01 16:36:05 compute-0 musing_bose[101698]:             "vg_name": "ceph_vg2"
Oct 01 16:36:05 compute-0 musing_bose[101698]:         }
Oct 01 16:36:05 compute-0 musing_bose[101698]:     ]
Oct 01 16:36:05 compute-0 musing_bose[101698]: }
Oct 01 16:36:05 compute-0 systemd[1]: libpod-ad2dd3b007b056479d732c2c2f04d5fae6717506c4184ef80681f4d5e8171ab6.scope: Deactivated successfully.
Oct 01 16:36:05 compute-0 conmon[101698]: conmon ad2dd3b007b056479d73 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad2dd3b007b056479d732c2c2f04d5fae6717506c4184ef80681f4d5e8171ab6.scope/container/memory.events
Oct 01 16:36:05 compute-0 podman[101681]: 2025-10-01 16:36:05.872599674 +0000 UTC m=+0.938788027 container died ad2dd3b007b056479d732c2c2f04d5fae6717506c4184ef80681f4d5e8171ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 16:36:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ee34ca1692d60c19ce2071e90c23ca71cfc9641b5d3fcad85620b48d87e230a-merged.mount: Deactivated successfully.
Oct 01 16:36:05 compute-0 podman[101681]: 2025-10-01 16:36:05.934056622 +0000 UTC m=+1.000245015 container remove ad2dd3b007b056479d732c2c2f04d5fae6717506c4184ef80681f4d5e8171ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bose, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 16:36:05 compute-0 systemd[1]: libpod-conmon-ad2dd3b007b056479d732c2c2f04d5fae6717506c4184ef80681f4d5e8171ab6.scope: Deactivated successfully.
Oct 01 16:36:05 compute-0 sudo[101518]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:06 compute-0 sudo[101751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:36:06 compute-0 sudo[101751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:06 compute-0 sudo[101751]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:06 compute-0 sudo[101776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:36:06 compute-0 sudo[101776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:06 compute-0 sudo[101776]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:06 compute-0 sudo[101801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:36:06 compute-0 sudo[101801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:06 compute-0 sudo[101801]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:06 compute-0 sudo[101826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:36:06 compute-0 sudo[101826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:06 compute-0 sudo[101874]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoqqwzuasxgagejqiynnngfjxkbkmpyn ; /usr/bin/python3'
Oct 01 16:36:06 compute-0 sudo[101874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:36:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct 01 16:36:06 compute-0 python3[101876]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:36:06 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/779073899' entity='client.rgw.rgw.compute-0.ktodly' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 01 16:36:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct 01 16:36:06 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct 01 16:36:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct 01 16:36:06 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/779073899' entity='client.rgw.rgw.compute-0.ktodly' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 01 16:36:06 compute-0 ceph-mon[74273]: pgmap v85: 10 pgs: 1 unknown, 9 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Oct 01 16:36:06 compute-0 ceph-mon[74273]: osdmap e38: 3 total, 3 up, 3 in
Oct 01 16:36:06 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/779073899' entity='client.rgw.rgw.compute-0.ktodly' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 01 16:36:06 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2613896551' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 16:36:06 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 39 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:06 compute-0 podman[101903]: 2025-10-01 16:36:06.496509393 +0000 UTC m=+0.044303105 container create 254c30a04160b08953740a14a80cea541fff0709c2f59ebf5f71df918ca16809 (image=quay.io/ceph/ceph:v18, name=naughty_snyder, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:36:06 compute-0 systemd[1]: Started libpod-conmon-254c30a04160b08953740a14a80cea541fff0709c2f59ebf5f71df918ca16809.scope.
Oct 01 16:36:06 compute-0 podman[101928]: 2025-10-01 16:36:06.547506073 +0000 UTC m=+0.038195865 container create ac3f6d9c050be127d9ad7fb7345518819ef01eb17608d6e39bf200a514f5fc56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hofstadter, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:36:06 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:36:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0b6c6729862d5b83c665df3fba6a482a27ca47dd62ddaecd3ba203efd0ec35/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0b6c6729862d5b83c665df3fba6a482a27ca47dd62ddaecd3ba203efd0ec35/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:06 compute-0 systemd[1]: Started libpod-conmon-ac3f6d9c050be127d9ad7fb7345518819ef01eb17608d6e39bf200a514f5fc56.scope.
Oct 01 16:36:06 compute-0 podman[101903]: 2025-10-01 16:36:06.474656613 +0000 UTC m=+0.022450355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:36:06 compute-0 podman[101903]: 2025-10-01 16:36:06.577668827 +0000 UTC m=+0.125462569 container init 254c30a04160b08953740a14a80cea541fff0709c2f59ebf5f71df918ca16809 (image=quay.io/ceph/ceph:v18, name=naughty_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 16:36:06 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:36:06 compute-0 podman[101903]: 2025-10-01 16:36:06.585823369 +0000 UTC m=+0.133617091 container start 254c30a04160b08953740a14a80cea541fff0709c2f59ebf5f71df918ca16809 (image=quay.io/ceph/ceph:v18, name=naughty_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 16:36:06 compute-0 podman[101903]: 2025-10-01 16:36:06.589327315 +0000 UTC m=+0.137121027 container attach 254c30a04160b08953740a14a80cea541fff0709c2f59ebf5f71df918ca16809 (image=quay.io/ceph/ceph:v18, name=naughty_snyder, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct 01 16:36:06 compute-0 podman[101928]: 2025-10-01 16:36:06.596349999 +0000 UTC m=+0.087039821 container init ac3f6d9c050be127d9ad7fb7345518819ef01eb17608d6e39bf200a514f5fc56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:36:06 compute-0 podman[101928]: 2025-10-01 16:36:06.602407598 +0000 UTC m=+0.093097390 container start ac3f6d9c050be127d9ad7fb7345518819ef01eb17608d6e39bf200a514f5fc56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 16:36:06 compute-0 vigilant_hofstadter[101951]: 167 167
Oct 01 16:36:06 compute-0 systemd[1]: libpod-ac3f6d9c050be127d9ad7fb7345518819ef01eb17608d6e39bf200a514f5fc56.scope: Deactivated successfully.
Oct 01 16:36:06 compute-0 podman[101928]: 2025-10-01 16:36:06.606991612 +0000 UTC m=+0.097681454 container attach ac3f6d9c050be127d9ad7fb7345518819ef01eb17608d6e39bf200a514f5fc56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hofstadter, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 16:36:06 compute-0 podman[101928]: 2025-10-01 16:36:06.607350131 +0000 UTC m=+0.098039973 container died ac3f6d9c050be127d9ad7fb7345518819ef01eb17608d6e39bf200a514f5fc56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hofstadter, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:36:06 compute-0 podman[101928]: 2025-10-01 16:36:06.531007095 +0000 UTC m=+0.021696907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:36:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c834dc7f957eb43a5b24c54ba750a249a1bc4a29812d3eedf9247c34153c6a8b-merged.mount: Deactivated successfully.
Oct 01 16:36:06 compute-0 podman[101928]: 2025-10-01 16:36:06.640426927 +0000 UTC m=+0.131116719 container remove ac3f6d9c050be127d9ad7fb7345518819ef01eb17608d6e39bf200a514f5fc56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 16:36:06 compute-0 systemd[1]: libpod-conmon-ac3f6d9c050be127d9ad7fb7345518819ef01eb17608d6e39bf200a514f5fc56.scope: Deactivated successfully.
Oct 01 16:36:06 compute-0 podman[101975]: 2025-10-01 16:36:06.795003035 +0000 UTC m=+0.050428216 container create 8800e6f6dfb7f5ca01eb92e551f82f0ec205bb1c39cd0d4657760a514b522a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:36:06 compute-0 systemd[1]: Started libpod-conmon-8800e6f6dfb7f5ca01eb92e551f82f0ec205bb1c39cd0d4657760a514b522a61.scope.
Oct 01 16:36:06 compute-0 podman[101975]: 2025-10-01 16:36:06.773682209 +0000 UTC m=+0.029107400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:36:06 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:36:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e170f1b36033eeaecbe8982db8f6370c4af82fa55820c2f1e367ffbe9bd6e727/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e170f1b36033eeaecbe8982db8f6370c4af82fa55820c2f1e367ffbe9bd6e727/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e170f1b36033eeaecbe8982db8f6370c4af82fa55820c2f1e367ffbe9bd6e727/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e170f1b36033eeaecbe8982db8f6370c4af82fa55820c2f1e367ffbe9bd6e727/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:06 compute-0 podman[101975]: 2025-10-01 16:36:06.914158308 +0000 UTC m=+0.169583469 container init 8800e6f6dfb7f5ca01eb92e551f82f0ec205bb1c39cd0d4657760a514b522a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:36:06 compute-0 podman[101975]: 2025-10-01 16:36:06.921366966 +0000 UTC m=+0.176792127 container start 8800e6f6dfb7f5ca01eb92e551f82f0ec205bb1c39cd0d4657760a514b522a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 16:36:06 compute-0 podman[101975]: 2025-10-01 16:36:06.92556326 +0000 UTC m=+0.180988421 container attach 8800e6f6dfb7f5ca01eb92e551f82f0ec205bb1c39cd0d4657760a514b522a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 16:36:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Oct 01 16:36:07 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/942864111' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 01 16:36:07 compute-0 naughty_snyder[101944]: mimic
Oct 01 16:36:07 compute-0 systemd[1]: libpod-254c30a04160b08953740a14a80cea541fff0709c2f59ebf5f71df918ca16809.scope: Deactivated successfully.
Oct 01 16:36:07 compute-0 podman[102018]: 2025-10-01 16:36:07.193487797 +0000 UTC m=+0.022328843 container died 254c30a04160b08953740a14a80cea541fff0709c2f59ebf5f71df918ca16809 (image=quay.io/ceph/ceph:v18, name=naughty_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 16:36:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf0b6c6729862d5b83c665df3fba6a482a27ca47dd62ddaecd3ba203efd0ec35-merged.mount: Deactivated successfully.
Oct 01 16:36:07 compute-0 podman[102018]: 2025-10-01 16:36:07.229142597 +0000 UTC m=+0.057983613 container remove 254c30a04160b08953740a14a80cea541fff0709c2f59ebf5f71df918ca16809 (image=quay.io/ceph/ceph:v18, name=naughty_snyder, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:36:07 compute-0 systemd[1]: libpod-conmon-254c30a04160b08953740a14a80cea541fff0709c2f59ebf5f71df918ca16809.scope: Deactivated successfully.
Oct 01 16:36:07 compute-0 sudo[101874]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v88: 11 pgs: 2 unknown, 9 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
Oct 01 16:36:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct 01 16:36:07 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/779073899' entity='client.rgw.rgw.compute-0.ktodly' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 01 16:36:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct 01 16:36:07 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct 01 16:36:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/779073899' entity='client.rgw.rgw.compute-0.ktodly' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 01 16:36:07 compute-0 ceph-mon[74273]: osdmap e39: 3 total, 3 up, 3 in
Oct 01 16:36:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/779073899' entity='client.rgw.rgw.compute-0.ktodly' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 01 16:36:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/942864111' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 01 16:36:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/779073899' entity='client.rgw.rgw.compute-0.ktodly' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 01 16:36:07 compute-0 ceph-mon[74273]: osdmap e40: 3 total, 3 up, 3 in
Oct 01 16:36:07 compute-0 radosgw[100177]: LDAP not started since no server URIs were provided in the configuration.
Oct 01 16:36:07 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-rgw-rgw-compute-0-ktodly[100163]: 2025-10-01T16:36:07.644+0000 7f41f3c4c940 -1 LDAP not started since no server URIs were provided in the configuration.
Oct 01 16:36:07 compute-0 radosgw[100177]: framework: beast
Oct 01 16:36:07 compute-0 radosgw[100177]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct 01 16:36:07 compute-0 radosgw[100177]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct 01 16:36:07 compute-0 radosgw[100177]: starting handler: beast
Oct 01 16:36:07 compute-0 radosgw[100177]: set uid:gid to 167:167 (ceph:ceph)
Oct 01 16:36:07 compute-0 radosgw[100177]: mgrc service_daemon_register rgw.14273 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.ktodly,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864100,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=1ec5f351-701a-45ec-b38c-bfb913f22f77,zone_name=default,zonegroup_id=ee91f502-589a-466c-9071-0e203e0d3601,zonegroup_name=default}
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]: {
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:         "osd_id": 2,
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:         "type": "bluestore"
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:     },
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:         "osd_id": 0,
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:         "type": "bluestore"
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:     },
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:         "osd_id": 1,
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:         "type": "bluestore"
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]:     }
Oct 01 16:36:07 compute-0 wizardly_darwin[101992]: }
Oct 01 16:36:07 compute-0 systemd[1]: libpod-8800e6f6dfb7f5ca01eb92e551f82f0ec205bb1c39cd0d4657760a514b522a61.scope: Deactivated successfully.
Oct 01 16:36:07 compute-0 podman[101975]: 2025-10-01 16:36:07.870503867 +0000 UTC m=+1.125929018 container died 8800e6f6dfb7f5ca01eb92e551f82f0ec205bb1c39cd0d4657760a514b522a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:36:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-e170f1b36033eeaecbe8982db8f6370c4af82fa55820c2f1e367ffbe9bd6e727-merged.mount: Deactivated successfully.
Oct 01 16:36:07 compute-0 podman[101975]: 2025-10-01 16:36:07.925018733 +0000 UTC m=+1.180443884 container remove 8800e6f6dfb7f5ca01eb92e551f82f0ec205bb1c39cd0d4657760a514b522a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 16:36:07 compute-0 systemd[1]: libpod-conmon-8800e6f6dfb7f5ca01eb92e551f82f0ec205bb1c39cd0d4657760a514b522a61.scope: Deactivated successfully.
Oct 01 16:36:07 compute-0 sudo[101826]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:36:07 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:36:07 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:07 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev ff813aea-e855-4075-aa51-0d12a6a8e907 does not exist
Oct 01 16:36:07 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 68c41808-970b-40e0-9202-b0a5c1b20368 does not exist
Oct 01 16:36:07 compute-0 sudo[102639]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uedhmtfxtnjrjkcpjakmgadjaqgnesxq ; /usr/bin/python3'
Oct 01 16:36:07 compute-0 sudo[102639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:36:08 compute-0 sudo[102640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:36:08 compute-0 sudo[102640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:08 compute-0 sudo[102640]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:08 compute-0 sudo[102667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:36:08 compute-0 sudo[102667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:08 compute-0 sudo[102667]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:08 compute-0 sudo[102692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:36:08 compute-0 sudo[102692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:08 compute-0 sudo[102692]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:08 compute-0 python3[102653]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:36:08 compute-0 podman[102718]: 2025-10-01 16:36:08.196235882 +0000 UTC m=+0.038260926 container create 7dcedc7614c2cfe6b6f570afca3f56fabb2aadac3f15a2674664e814f31f1efe (image=quay.io/ceph/ceph:v18, name=nice_hermann, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:36:08 compute-0 sudo[102717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:36:08 compute-0 sudo[102717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:08 compute-0 sudo[102717]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:08 compute-0 systemd[1]: Started libpod-conmon-7dcedc7614c2cfe6b6f570afca3f56fabb2aadac3f15a2674664e814f31f1efe.scope.
Oct 01 16:36:08 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:36:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00ea5cbd17a76b75671b57e9333f3296eea7341232a2b9b695a67805cf26274/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00ea5cbd17a76b75671b57e9333f3296eea7341232a2b9b695a67805cf26274/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:08 compute-0 sudo[102757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:36:08 compute-0 sudo[102757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:08 compute-0 podman[102718]: 2025-10-01 16:36:08.180563325 +0000 UTC m=+0.022588379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:36:08 compute-0 podman[102718]: 2025-10-01 16:36:08.277177101 +0000 UTC m=+0.119202145 container init 7dcedc7614c2cfe6b6f570afca3f56fabb2aadac3f15a2674664e814f31f1efe (image=quay.io/ceph/ceph:v18, name=nice_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 16:36:08 compute-0 sudo[102757]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:08 compute-0 podman[102718]: 2025-10-01 16:36:08.285993268 +0000 UTC m=+0.128018322 container start 7dcedc7614c2cfe6b6f570afca3f56fabb2aadac3f15a2674664e814f31f1efe (image=quay.io/ceph/ceph:v18, name=nice_hermann, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:36:08 compute-0 podman[102718]: 2025-10-01 16:36:08.289265769 +0000 UTC m=+0.131290813 container attach 7dcedc7614c2cfe6b6f570afca3f56fabb2aadac3f15a2674664e814f31f1efe (image=quay.io/ceph/ceph:v18, name=nice_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 01 16:36:08 compute-0 sudo[102786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 16:36:08 compute-0 sudo[102786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:08 compute-0 ceph-mon[74273]: pgmap v88: 11 pgs: 2 unknown, 9 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
Oct 01 16:36:08 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:08 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:36:08 compute-0 podman[102902]: 2025-10-01 16:36:08.811270712 +0000 UTC m=+0.052785445 container exec bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 01 16:36:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Oct 01 16:36:08 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3657018888' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 01 16:36:08 compute-0 nice_hermann[102770]: 
Oct 01 16:36:08 compute-0 nice_hermann[102770]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Oct 01 16:36:08 compute-0 systemd[1]: libpod-7dcedc7614c2cfe6b6f570afca3f56fabb2aadac3f15a2674664e814f31f1efe.scope: Deactivated successfully.
Oct 01 16:36:08 compute-0 podman[102718]: 2025-10-01 16:36:08.897881561 +0000 UTC m=+0.739906595 container died 7dcedc7614c2cfe6b6f570afca3f56fabb2aadac3f15a2674664e814f31f1efe (image=quay.io/ceph/ceph:v18, name=nice_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 16:36:08 compute-0 podman[102902]: 2025-10-01 16:36:08.910214525 +0000 UTC m=+0.151729248 container exec_died bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:36:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e00ea5cbd17a76b75671b57e9333f3296eea7341232a2b9b695a67805cf26274-merged.mount: Deactivated successfully.
Oct 01 16:36:08 compute-0 podman[102718]: 2025-10-01 16:36:08.950924291 +0000 UTC m=+0.792949325 container remove 7dcedc7614c2cfe6b6f570afca3f56fabb2aadac3f15a2674664e814f31f1efe (image=quay.io/ceph/ceph:v18, name=nice_hermann, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:36:08 compute-0 systemd[1]: libpod-conmon-7dcedc7614c2cfe6b6f570afca3f56fabb2aadac3f15a2674664e814f31f1efe.scope: Deactivated successfully.
Oct 01 16:36:08 compute-0 sudo[102639]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v90: 11 pgs: 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 7.0 KiB/s wr, 86 op/s
Oct 01 16:36:09 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3657018888' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 01 16:36:09 compute-0 sudo[102786]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:36:09 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:36:09 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:36:09 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:36:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:36:09 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:36:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:36:09 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:09 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev a2ceaabf-a71e-4116-a041-96f00b996f01 does not exist
Oct 01 16:36:09 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 68671c49-ea35-4f1f-a3c3-87c4ca2812ec does not exist
Oct 01 16:36:09 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev fa046f05-9715-4ebf-9d0a-4d76bc3ed8d0 does not exist
Oct 01 16:36:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:36:09 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:36:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:36:09 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:36:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:36:09 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:36:09 compute-0 sudo[103074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:36:09 compute-0 sudo[103074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:09 compute-0 sudo[103074]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:09 compute-0 sudo[103099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:36:09 compute-0 sudo[103099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:09 compute-0 sudo[103099]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:09 compute-0 sudo[103124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:36:09 compute-0 sudo[103124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:09 compute-0 sudo[103124]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:09 compute-0 sudo[103149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:36:09 compute-0 sudo[103149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:10 compute-0 podman[103215]: 2025-10-01 16:36:10.259064799 +0000 UTC m=+0.050614891 container create ec4bc6196277028873c4083e9507b0b065b8d1804ac5cc3d5191f41e76e07606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_morse, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 16:36:10 compute-0 systemd[1]: Started libpod-conmon-ec4bc6196277028873c4083e9507b0b065b8d1804ac5cc3d5191f41e76e07606.scope.
Oct 01 16:36:10 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:36:10 compute-0 podman[103215]: 2025-10-01 16:36:10.233706733 +0000 UTC m=+0.025256855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:36:10 compute-0 podman[103215]: 2025-10-01 16:36:10.349733918 +0000 UTC m=+0.141284030 container init ec4bc6196277028873c4083e9507b0b065b8d1804ac5cc3d5191f41e76e07606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:36:10 compute-0 podman[103215]: 2025-10-01 16:36:10.355756687 +0000 UTC m=+0.147306789 container start ec4bc6196277028873c4083e9507b0b065b8d1804ac5cc3d5191f41e76e07606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 16:36:10 compute-0 tender_morse[103232]: 167 167
Oct 01 16:36:10 compute-0 systemd[1]: libpod-ec4bc6196277028873c4083e9507b0b065b8d1804ac5cc3d5191f41e76e07606.scope: Deactivated successfully.
Oct 01 16:36:10 compute-0 podman[103215]: 2025-10-01 16:36:10.37612598 +0000 UTC m=+0.167676102 container attach ec4bc6196277028873c4083e9507b0b065b8d1804ac5cc3d5191f41e76e07606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_morse, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 16:36:10 compute-0 podman[103215]: 2025-10-01 16:36:10.376745955 +0000 UTC m=+0.168296087 container died ec4bc6196277028873c4083e9507b0b065b8d1804ac5cc3d5191f41e76e07606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 16:36:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-00b7adbf2a703c38fc375cb16f90791ffe627381cdc3f8750779c6a0a607a235-merged.mount: Deactivated successfully.
Oct 01 16:36:10 compute-0 podman[103215]: 2025-10-01 16:36:10.45832289 +0000 UTC m=+0.249872992 container remove ec4bc6196277028873c4083e9507b0b065b8d1804ac5cc3d5191f41e76e07606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_morse, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 16:36:10 compute-0 systemd[1]: libpod-conmon-ec4bc6196277028873c4083e9507b0b065b8d1804ac5cc3d5191f41e76e07606.scope: Deactivated successfully.
Oct 01 16:36:10 compute-0 ceph-mon[74273]: pgmap v90: 11 pgs: 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 7.0 KiB/s wr, 86 op/s
Oct 01 16:36:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:36:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:36:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:36:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:36:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:36:10 compute-0 podman[103256]: 2025-10-01 16:36:10.610029707 +0000 UTC m=+0.047690159 container create 0ef62fb7c88be2cf201e293bd5c88609a32876fca35db9f87c1c3545451166ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:36:10 compute-0 systemd[1]: Started libpod-conmon-0ef62fb7c88be2cf201e293bd5c88609a32876fca35db9f87c1c3545451166ed.scope.
Oct 01 16:36:10 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:36:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6122f4a2fa2a1690291f41751de806463869d37966d4e08af0ab1f803a80d751/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6122f4a2fa2a1690291f41751de806463869d37966d4e08af0ab1f803a80d751/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6122f4a2fa2a1690291f41751de806463869d37966d4e08af0ab1f803a80d751/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6122f4a2fa2a1690291f41751de806463869d37966d4e08af0ab1f803a80d751/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6122f4a2fa2a1690291f41751de806463869d37966d4e08af0ab1f803a80d751/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:10 compute-0 podman[103256]: 2025-10-01 16:36:10.582883547 +0000 UTC m=+0.020544089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:36:10 compute-0 podman[103256]: 2025-10-01 16:36:10.682034896 +0000 UTC m=+0.119695368 container init 0ef62fb7c88be2cf201e293bd5c88609a32876fca35db9f87c1c3545451166ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:36:10 compute-0 podman[103256]: 2025-10-01 16:36:10.691556021 +0000 UTC m=+0.129216473 container start 0ef62fb7c88be2cf201e293bd5c88609a32876fca35db9f87c1c3545451166ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 16:36:10 compute-0 podman[103256]: 2025-10-01 16:36:10.699053826 +0000 UTC m=+0.136714298 container attach 0ef62fb7c88be2cf201e293bd5c88609a32876fca35db9f87c1c3545451166ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:36:11
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'backups', '.rgw.root', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'images']
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v91: 11 pgs: 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 5.7 KiB/s wr, 70 op/s
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Oct 01 16:36:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Oct 01 16:36:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:36:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct 01 16:36:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 16:36:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 01 16:36:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct 01 16:36:11 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct 01 16:36:11 compute-0 ceph-mgr[74571]: [progress INFO root] update: starting ev 5da2de25-8eef-4718-8a48-bbd87e7d0fa3 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 01 16:36:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Oct 01 16:36:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 16:36:11 compute-0 adoring_ellis[103273]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:36:11 compute-0 adoring_ellis[103273]: --> relative data size: 1.0
Oct 01 16:36:11 compute-0 adoring_ellis[103273]: --> All data devices are unavailable
Oct 01 16:36:11 compute-0 systemd[1]: libpod-0ef62fb7c88be2cf201e293bd5c88609a32876fca35db9f87c1c3545451166ed.scope: Deactivated successfully.
Oct 01 16:36:11 compute-0 systemd[1]: libpod-0ef62fb7c88be2cf201e293bd5c88609a32876fca35db9f87c1c3545451166ed.scope: Consumed 1.071s CPU time.
Oct 01 16:36:11 compute-0 podman[103302]: 2025-10-01 16:36:11.846515825 +0000 UTC m=+0.025893531 container died 0ef62fb7c88be2cf201e293bd5c88609a32876fca35db9f87c1c3545451166ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 16:36:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6122f4a2fa2a1690291f41751de806463869d37966d4e08af0ab1f803a80d751-merged.mount: Deactivated successfully.
Oct 01 16:36:11 compute-0 podman[103302]: 2025-10-01 16:36:11.929314 +0000 UTC m=+0.108691666 container remove 0ef62fb7c88be2cf201e293bd5c88609a32876fca35db9f87c1c3545451166ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 16:36:11 compute-0 systemd[1]: libpod-conmon-0ef62fb7c88be2cf201e293bd5c88609a32876fca35db9f87c1c3545451166ed.scope: Deactivated successfully.
Oct 01 16:36:11 compute-0 sudo[103149]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:12 compute-0 sudo[103317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:36:12 compute-0 sudo[103317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:12 compute-0 sudo[103317]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:12 compute-0 sudo[103342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:36:12 compute-0 sudo[103342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:12 compute-0 sudo[103342]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:12 compute-0 sudo[103367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:36:12 compute-0 sudo[103367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:12 compute-0 sudo[103367]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:12 compute-0 sudo[103392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:36:12 compute-0 sudo[103392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct 01 16:36:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 01 16:36:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct 01 16:36:12 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct 01 16:36:12 compute-0 ceph-mgr[74571]: [progress INFO root] update: starting ev 3991c821-eefe-45b8-bace-9b453f88132c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 01 16:36:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Oct 01 16:36:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 16:36:12 compute-0 ceph-mon[74273]: pgmap v91: 11 pgs: 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 5.7 KiB/s wr, 70 op/s
Oct 01 16:36:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 01 16:36:12 compute-0 ceph-mon[74273]: osdmap e41: 3 total, 3 up, 3 in
Oct 01 16:36:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 16:36:12 compute-0 podman[103457]: 2025-10-01 16:36:12.698982189 +0000 UTC m=+0.085593465 container create 567606d0b696e0ffc51ff66b504ce1d3d758c00f1f2cc6863cb3665f84b94d60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:36:12 compute-0 podman[103457]: 2025-10-01 16:36:12.642580106 +0000 UTC m=+0.029191482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:36:12 compute-0 systemd[1]: Started libpod-conmon-567606d0b696e0ffc51ff66b504ce1d3d758c00f1f2cc6863cb3665f84b94d60.scope.
Oct 01 16:36:12 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:36:12 compute-0 podman[103457]: 2025-10-01 16:36:12.847855686 +0000 UTC m=+0.234466992 container init 567606d0b696e0ffc51ff66b504ce1d3d758c00f1f2cc6863cb3665f84b94d60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mahavira, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:36:12 compute-0 podman[103457]: 2025-10-01 16:36:12.853958026 +0000 UTC m=+0.240569302 container start 567606d0b696e0ffc51ff66b504ce1d3d758c00f1f2cc6863cb3665f84b94d60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mahavira, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 01 16:36:12 compute-0 podman[103457]: 2025-10-01 16:36:12.85774302 +0000 UTC m=+0.244354326 container attach 567606d0b696e0ffc51ff66b504ce1d3d758c00f1f2cc6863cb3665f84b94d60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mahavira, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 16:36:12 compute-0 sweet_mahavira[103474]: 167 167
Oct 01 16:36:12 compute-0 systemd[1]: libpod-567606d0b696e0ffc51ff66b504ce1d3d758c00f1f2cc6863cb3665f84b94d60.scope: Deactivated successfully.
Oct 01 16:36:12 compute-0 podman[103457]: 2025-10-01 16:36:12.859133644 +0000 UTC m=+0.245744920 container died 567606d0b696e0ffc51ff66b504ce1d3d758c00f1f2cc6863cb3665f84b94d60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mahavira, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:36:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a406c12fb42f4c1cc4719135e6469daad886167b2a5b7e4850f8728054c2038d-merged.mount: Deactivated successfully.
Oct 01 16:36:12 compute-0 podman[103457]: 2025-10-01 16:36:12.958172381 +0000 UTC m=+0.344783647 container remove 567606d0b696e0ffc51ff66b504ce1d3d758c00f1f2cc6863cb3665f84b94d60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mahavira, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 16:36:12 compute-0 systemd[1]: libpod-conmon-567606d0b696e0ffc51ff66b504ce1d3d758c00f1f2cc6863cb3665f84b94d60.scope: Deactivated successfully.
Oct 01 16:36:13 compute-0 podman[103499]: 2025-10-01 16:36:13.115041765 +0000 UTC m=+0.049947765 container create 1443044c4385bb7e322511a2eb42c3138d3394583890bc425158907d2253f06f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:36:13 compute-0 systemd[1]: Started libpod-conmon-1443044c4385bb7e322511a2eb42c3138d3394583890bc425158907d2253f06f.scope.
Oct 01 16:36:13 compute-0 podman[103499]: 2025-10-01 16:36:13.084607093 +0000 UTC m=+0.019513113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:36:13 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:36:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78afc6cdb400a0f1f5b6ab2b993bf7dff3bdbe0926f5deb54b3406ba95b4799d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78afc6cdb400a0f1f5b6ab2b993bf7dff3bdbe0926f5deb54b3406ba95b4799d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78afc6cdb400a0f1f5b6ab2b993bf7dff3bdbe0926f5deb54b3406ba95b4799d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78afc6cdb400a0f1f5b6ab2b993bf7dff3bdbe0926f5deb54b3406ba95b4799d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:13 compute-0 podman[103499]: 2025-10-01 16:36:13.235082719 +0000 UTC m=+0.169988719 container init 1443044c4385bb7e322511a2eb42c3138d3394583890bc425158907d2253f06f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 16:36:13 compute-0 podman[103499]: 2025-10-01 16:36:13.240834831 +0000 UTC m=+0.175740831 container start 1443044c4385bb7e322511a2eb42c3138d3394583890bc425158907d2253f06f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 01 16:36:13 compute-0 podman[103499]: 2025-10-01 16:36:13.244497232 +0000 UTC m=+0.179403262 container attach 1443044c4385bb7e322511a2eb42c3138d3394583890bc425158907d2253f06f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 16:36:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v94: 11 pgs: 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 5.7 KiB/s wr, 184 op/s
Oct 01 16:36:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 01 16:36:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 01 16:36:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct 01 16:36:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 01 16:36:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 16:36:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 16:36:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct 01 16:36:13 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct 01 16:36:13 compute-0 ceph-mgr[74571]: [progress INFO root] update: starting ev 5d55ea64-dbbe-4698-8594-71d14b4b3871 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 01 16:36:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Oct 01 16:36:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 16:36:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 01 16:36:13 compute-0 ceph-mon[74273]: osdmap e42: 3 total, 3 up, 3 in
Oct 01 16:36:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 16:36:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]: {
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:     "0": [
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:         {
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "devices": [
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "/dev/loop3"
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             ],
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "lv_name": "ceph_lv0",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "lv_size": "21470642176",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "name": "ceph_lv0",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "tags": {
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.cluster_name": "ceph",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.crush_device_class": "",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.encrypted": "0",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.osd_id": "0",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.type": "block",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.vdo": "0"
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             },
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "type": "block",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "vg_name": "ceph_vg0"
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:         }
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:     ],
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:     "1": [
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:         {
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "devices": [
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "/dev/loop4"
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             ],
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "lv_name": "ceph_lv1",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "lv_size": "21470642176",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "name": "ceph_lv1",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "tags": {
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.cluster_name": "ceph",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.crush_device_class": "",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.encrypted": "0",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.osd_id": "1",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.type": "block",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.vdo": "0"
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             },
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "type": "block",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "vg_name": "ceph_vg1"
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:         }
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:     ],
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:     "2": [
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:         {
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "devices": [
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "/dev/loop5"
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             ],
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "lv_name": "ceph_lv2",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "lv_size": "21470642176",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "name": "ceph_lv2",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "tags": {
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.cluster_name": "ceph",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.crush_device_class": "",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.encrypted": "0",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.osd_id": "2",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.type": "block",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:                 "ceph.vdo": "0"
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             },
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "type": "block",
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:             "vg_name": "ceph_vg2"
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:         }
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]:     ]
Oct 01 16:36:14 compute-0 admiring_leavitt[103515]: }
Oct 01 16:36:14 compute-0 systemd[1]: libpod-1443044c4385bb7e322511a2eb42c3138d3394583890bc425158907d2253f06f.scope: Deactivated successfully.
Oct 01 16:36:14 compute-0 podman[103524]: 2025-10-01 16:36:14.132830062 +0000 UTC m=+0.037965429 container died 1443044c4385bb7e322511a2eb42c3138d3394583890bc425158907d2253f06f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_leavitt, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 16:36:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-78afc6cdb400a0f1f5b6ab2b993bf7dff3bdbe0926f5deb54b3406ba95b4799d-merged.mount: Deactivated successfully.
Oct 01 16:36:14 compute-0 podman[103524]: 2025-10-01 16:36:14.225002838 +0000 UTC m=+0.130138175 container remove 1443044c4385bb7e322511a2eb42c3138d3394583890bc425158907d2253f06f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 16:36:14 compute-0 systemd[1]: libpod-conmon-1443044c4385bb7e322511a2eb42c3138d3394583890bc425158907d2253f06f.scope: Deactivated successfully.
Oct 01 16:36:14 compute-0 sudo[103392]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:14 compute-0 sudo[103539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:36:14 compute-0 sudo[103539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:14 compute-0 sudo[103539]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:14 compute-0 sudo[103564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:36:14 compute-0 sudo[103564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:14 compute-0 sudo[103564]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:14 compute-0 sudo[103589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:36:14 compute-0 sudo[103589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:14 compute-0 sudo[103589]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:14 compute-0 sudo[103614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:36:14 compute-0 sudo[103614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct 01 16:36:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 01 16:36:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct 01 16:36:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct 01 16:36:14 compute-0 ceph-mgr[74571]: [progress INFO root] update: starting ev 20adff66-0ac3-4d2f-b91e-a826c3bd2bc9 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 01 16:36:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Oct 01 16:36:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 43 pg[2.0( empty local-lis/les=18/19 n=0 ec=13/13 lis/c=18/18 les/c/f=19/19/0 sis=43 pruub=12.429223061s) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active pruub 70.926834106s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.0( empty local-lis/les=18/19 n=0 ec=13/13 lis/c=18/18 les/c/f=19/19/0 sis=43 pruub=12.429223061s) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown pruub 70.926834106s@ mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.c( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.e( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.10( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.12( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.14( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.1a( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.1e( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 44 pg[2.1( empty local-lis/les=18/19 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:14 compute-0 ceph-mon[74273]: pgmap v94: 11 pgs: 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 5.7 KiB/s wr, 184 op/s
Oct 01 16:36:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 01 16:36:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 16:36:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 16:36:14 compute-0 ceph-mon[74273]: osdmap e43: 3 total, 3 up, 3 in
Oct 01 16:36:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 16:36:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 01 16:36:14 compute-0 ceph-mon[74273]: osdmap e44: 3 total, 3 up, 3 in
Oct 01 16:36:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct 01 16:36:14 compute-0 podman[103680]: 2025-10-01 16:36:14.972772546 +0000 UTC m=+0.046183832 container create 4de56cc9b4e39ccd6fe2189e2e4befeb853e18ad92f18bf5a1273874e914779a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 01 16:36:15 compute-0 systemd[1]: Started libpod-conmon-4de56cc9b4e39ccd6fe2189e2e4befeb853e18ad92f18bf5a1273874e914779a.scope.
Oct 01 16:36:15 compute-0 podman[103680]: 2025-10-01 16:36:14.952591357 +0000 UTC m=+0.026002653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:36:15 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:36:15 compute-0 podman[103680]: 2025-10-01 16:36:15.071929685 +0000 UTC m=+0.145340971 container init 4de56cc9b4e39ccd6fe2189e2e4befeb853e18ad92f18bf5a1273874e914779a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 16:36:15 compute-0 podman[103680]: 2025-10-01 16:36:15.077905282 +0000 UTC m=+0.151316568 container start 4de56cc9b4e39ccd6fe2189e2e4befeb853e18ad92f18bf5a1273874e914779a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wozniak, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:36:15 compute-0 compassionate_wozniak[103697]: 167 167
Oct 01 16:36:15 compute-0 podman[103680]: 2025-10-01 16:36:15.08306132 +0000 UTC m=+0.156472636 container attach 4de56cc9b4e39ccd6fe2189e2e4befeb853e18ad92f18bf5a1273874e914779a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 16:36:15 compute-0 systemd[1]: libpod-4de56cc9b4e39ccd6fe2189e2e4befeb853e18ad92f18bf5a1273874e914779a.scope: Deactivated successfully.
Oct 01 16:36:15 compute-0 podman[103680]: 2025-10-01 16:36:15.08468534 +0000 UTC m=+0.158096616 container died 4de56cc9b4e39ccd6fe2189e2e4befeb853e18ad92f18bf5a1273874e914779a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 16:36:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f067e95140b94ac15a6f624bbaf55fe3836b75c8fcc2469b7e214a4674eb4ff3-merged.mount: Deactivated successfully.
Oct 01 16:36:15 compute-0 podman[103680]: 2025-10-01 16:36:15.125409845 +0000 UTC m=+0.198821131 container remove 4de56cc9b4e39ccd6fe2189e2e4befeb853e18ad92f18bf5a1273874e914779a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 16:36:15 compute-0 systemd[1]: libpod-conmon-4de56cc9b4e39ccd6fe2189e2e4befeb853e18ad92f18bf5a1273874e914779a.scope: Deactivated successfully.
Oct 01 16:36:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v97: 73 pgs: 62 unknown, 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 0 B/s wr, 171 op/s
Oct 01 16:36:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 01 16:36:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 01 16:36:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:15 compute-0 podman[103721]: 2025-10-01 16:36:15.348762092 +0000 UTC m=+0.074420199 container create 41faa6bedd980600a04e6153b46256bc25ba82173b1d8ab431f35d6087b0522d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_franklin, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 16:36:15 compute-0 systemd[1]: Started libpod-conmon-41faa6bedd980600a04e6153b46256bc25ba82173b1d8ab431f35d6087b0522d.scope.
Oct 01 16:36:15 compute-0 podman[103721]: 2025-10-01 16:36:15.314833824 +0000 UTC m=+0.040491961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:36:15 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:36:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8179c90483aea638ca21a10e1d3dbfe27e47645a0bbd2d2dfbcba7ede73c8aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8179c90483aea638ca21a10e1d3dbfe27e47645a0bbd2d2dfbcba7ede73c8aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8179c90483aea638ca21a10e1d3dbfe27e47645a0bbd2d2dfbcba7ede73c8aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8179c90483aea638ca21a10e1d3dbfe27e47645a0bbd2d2dfbcba7ede73c8aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:15 compute-0 podman[103721]: 2025-10-01 16:36:15.463861335 +0000 UTC m=+0.189519462 container init 41faa6bedd980600a04e6153b46256bc25ba82173b1d8ab431f35d6087b0522d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 16:36:15 compute-0 podman[103721]: 2025-10-01 16:36:15.471462552 +0000 UTC m=+0.197120639 container start 41faa6bedd980600a04e6153b46256bc25ba82173b1d8ab431f35d6087b0522d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:36:15 compute-0 podman[103721]: 2025-10-01 16:36:15.475231615 +0000 UTC m=+0.200889722 container attach 41faa6bedd980600a04e6153b46256bc25ba82173b1d8ab431f35d6087b0522d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 16:36:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct 01 16:36:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct 01 16:36:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 16:36:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 16:36:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct 01 16:36:15 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct 01 16:36:15 compute-0 ceph-mgr[74571]: [progress INFO root] update: starting ev a21b5fbe-fdd1-40d4-b7dc-dfd8eb9c7c63 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct 01 16:36:15 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 45 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=45 pruub=10.395730019s) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active pruub 78.942207336s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Oct 01 16:36:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 16:36:15 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 45 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=45 pruub=10.395730019s) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown pruub 78.942207336s@ mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45 pruub=12.404707909s) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active pruub 71.920158386s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45 pruub=12.404707909s) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown pruub 71.920158386s@ mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct 01 16:36:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 16:36:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.1a( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.14( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.10( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.e( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.12( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.c( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.0( empty local-lis/les=43/45 n=0 ec=13/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.1( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.1e( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 43 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=43 pruub=8.149170876s) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active pruub 72.542213440s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=43 pruub=8.149170876s) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown pruub 72.542213440s@ mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.10( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.4( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.b( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.d( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.2( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.13( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.14( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.19( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.1a( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.1c( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:15 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=15/16 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-mgr[74571]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]: {
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:         "osd_id": 2,
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:         "type": "bluestore"
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:     },
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:         "osd_id": 0,
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:         "type": "bluestore"
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:     },
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:         "osd_id": 1,
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:         "type": "bluestore"
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]:     }
Oct 01 16:36:16 compute-0 flamboyant_franklin[103738]: }
Oct 01 16:36:16 compute-0 systemd[1]: libpod-41faa6bedd980600a04e6153b46256bc25ba82173b1d8ab431f35d6087b0522d.scope: Deactivated successfully.
Oct 01 16:36:16 compute-0 podman[103721]: 2025-10-01 16:36:16.422233864 +0000 UTC m=+1.147891961 container died 41faa6bedd980600a04e6153b46256bc25ba82173b1d8ab431f35d6087b0522d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:36:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8179c90483aea638ca21a10e1d3dbfe27e47645a0bbd2d2dfbcba7ede73c8aa-merged.mount: Deactivated successfully.
Oct 01 16:36:16 compute-0 podman[103721]: 2025-10-01 16:36:16.542468084 +0000 UTC m=+1.268126181 container remove 41faa6bedd980600a04e6153b46256bc25ba82173b1d8ab431f35d6087b0522d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 16:36:16 compute-0 systemd[1]: libpod-conmon-41faa6bedd980600a04e6153b46256bc25ba82173b1d8ab431f35d6087b0522d.scope: Deactivated successfully.
Oct 01 16:36:16 compute-0 sudo[103614]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:36:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:36:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:16 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 3c171bb9-5b78-41e9-88e4-b17ea72d1781 does not exist
Oct 01 16:36:16 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 8cfc2092-5159-4f0d-a56f-f5f3cd48aa60 does not exist
Oct 01 16:36:16 compute-0 sudo[103785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:36:16 compute-0 sudo[103785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:16 compute-0 sudo[103785]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct 01 16:36:16 compute-0 sudo[103810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:36:16 compute-0 sudo[103810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:36:16 compute-0 sudo[103810]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 01 16:36:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct 01 16:36:16 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct 01 16:36:16 compute-0 ceph-mgr[74571]: [progress INFO root] update: starting ev 6089ec68-5fb2-44c5-bd3c-d939db9532af (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 01 16:36:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Oct 01 16:36:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 16:36:16 compute-0 ceph-mon[74273]: pgmap v97: 73 pgs: 62 unknown, 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 0 B/s wr, 171 op/s
Oct 01 16:36:16 compute-0 ceph-mon[74273]: osdmap e45: 3 total, 3 up, 3 in
Oct 01 16:36:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 16:36:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.1e( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.10( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.1f( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.1e( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.1d( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.1f( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.12( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.14( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.15( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.17( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.8( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.a( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.b( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.7( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.8( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.7( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.6( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.6( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.b( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.4( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.1b( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.a( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.5( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.3( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.9( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.4( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.1a( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.2( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.19( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.3( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.1( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.5( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.c( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.d( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.e( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.e( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.d( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.1c( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.1b( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.f( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.10( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.11( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.12( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.13( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.15( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.16( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.17( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.18( empty local-lis/les=17/18 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.1e( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.1f( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.1f( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.10( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.1f( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.1c( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.1a( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.1b( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.18( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.7( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.19( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.3( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.5( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.1( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.6( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.8( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.b( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.a( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.4( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.0( empty local-lis/les=43/46 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.2( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.d( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.c( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.10( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.12( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.13( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.14( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 46 pg[3.17( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=15/15 les/c/f=16/16/0 sis=43) [1] r=0 lpr=43 pi=[15,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.1d( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.7( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.17( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.a( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.b( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.0( empty local-lis/les=45/46 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.8( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.b( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.6( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.6( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.8( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.d( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.5( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.1b( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.a( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.4( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.9( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.e( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.1a( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.1c( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.3( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.1b( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.1( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.0( empty local-lis/les=45/46 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.c( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.19( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.e( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.d( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.f( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.11( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.10( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.12( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.13( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.15( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.16( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.17( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.18( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:16 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 46 pg[4.1e( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=17/17 les/c/f=18/18/0 sis=45) [0] r=0 lpr=45 pi=[17,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:17 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Oct 01 16:36:17 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Oct 01 16:36:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v100: 135 pgs: 124 unknown, 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 01 16:36:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Oct 01 16:36:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct 01 16:36:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct 01 16:36:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 01 16:36:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 16:36:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct 01 16:36:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct 01 16:36:17 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct 01 16:36:17 compute-0 ceph-mgr[74571]: [progress INFO root] update: starting ev e9c961b5-daae-474c-8208-e80feed178a2 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 01 16:36:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Oct 01 16:36:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 16:36:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 01 16:36:17 compute-0 ceph-mon[74273]: osdmap e46: 3 total, 3 up, 3 in
Oct 01 16:36:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 16:36:17 compute-0 ceph-mon[74273]: 3.1 scrub starts
Oct 01 16:36:17 compute-0 ceph-mon[74273]: 3.1 scrub ok
Oct 01 16:36:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct 01 16:36:17 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 47 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47 pruub=14.449999809s) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active pruub 80.724113464s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:17 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 47 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47 pruub=14.449999809s) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown pruub 80.724113464s@ mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:36:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct 01 16:36:18 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 01 16:36:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct 01 16:36:18 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct 01 16:36:18 compute-0 ceph-mgr[74571]: [progress INFO root] update: starting ev 8a0f33f6-ba7e-44fb-bbc2-1c361bd88a57 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.13( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.12( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.10( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.11( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.17( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.16( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.15( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.14( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.b( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.a( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.9( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.d( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.6( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.4( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.f( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.e( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.c( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.8( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.5( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.7( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.1( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.2( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.3( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.1c( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.1d( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.1e( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.1f( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.18( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.1a( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.1b( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.19( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Oct 01 16:36:18 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.12( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-mon[74273]: pgmap v100: 135 pgs: 124 unknown, 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 01 16:36:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 16:36:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct 01 16:36:18 compute-0 ceph-mon[74273]: osdmap e47: 3 total, 3 up, 3 in
Oct 01 16:36:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 16:36:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 01 16:36:18 compute-0 ceph-mon[74273]: osdmap e48: 3 total, 3 up, 3 in
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.16( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.17( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.14( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.d( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.0( empty local-lis/les=47/48 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.10( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.7( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.1d( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.1e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:18 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 48 pg[7.19( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:19 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Oct 01 16:36:19 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Oct 01 16:36:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v103: 181 pgs: 1 peering, 46 unknown, 134 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 01 16:36:19 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 01 16:36:19 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 47 pg[6.0( v 37'39 (0'0,37'39] local-lis/les=21/22 n=22 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=47 pruub=10.871764183s) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 lcod 33'38 mlcod 33'38 active pruub 82.998924255s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 48 pg[6.0( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=47 pruub=10.871764183s) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 lcod 33'38 mlcod 0'0 unknown pruub 82.998924255s@ mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 48 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 48 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 48 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 48 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 48 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 48 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 48 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 48 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 48 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 48 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 48 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 48 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 48 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 48 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 48 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=21/22 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct 01 16:36:19 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 01 16:36:19 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 16:36:19 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 16:36:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct 01 16:36:19 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct 01 16:36:19 compute-0 ceph-mgr[74571]: [progress INFO root] update: starting ev cafd4b43-4172-4234-b71d-22b136eba0a0 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 01 16:36:19 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 49 pg[8.0( v 33'4 (0'0,33'4] local-lis/les=32/33 n=4 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=49 pruub=12.569817543s) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 33'3 mlcod 33'3 active pruub 80.850631714s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:19 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 49 pg[9.0( v 40'385 (0'0,40'385] local-lis/les=34/35 n=177 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49 pruub=14.622879028s) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 40'384 mlcod 40'384 active pruub 82.903709412s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Oct 01 16:36:19 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 16:36:19 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 49 pg[8.0( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=49 pruub=12.569817543s) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 33'3 mlcod 0'0 unknown pruub 80.850631714s@ mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:19 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 49 pg[9.0( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49 pruub=14.622879028s) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 40'384 mlcod 0'0 unknown pruub 82.903709412s@ mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 16:36:19 compute-0 ceph-mon[74273]: 3.2 scrub starts
Oct 01 16:36:19 compute-0 ceph-mon[74273]: 3.2 scrub ok
Oct 01 16:36:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 01 16:36:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 16:36:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 16:36:19 compute-0 ceph-mon[74273]: osdmap e49: 3 total, 3 up, 3 in
Oct 01 16:36:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 49 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=47/49 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 49 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 49 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=47/49 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 49 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 49 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 49 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 49 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 49 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=47/49 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 49 pg[6.0( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 lcod 33'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 49 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=47/49 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 49 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=47/49 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 49 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 49 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 49 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=47/49 n=2 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 49 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:19 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 49 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [0] r=0 lpr=47 pi=[21,47)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct 01 16:36:20 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct 01 16:36:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct 01 16:36:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 01 16:36:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct 01 16:36:20 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] update: starting ev 5f994a6a-e617-4381-be2c-ec893ffa0bf1 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] complete: finished ev 5da2de25-8eef-4718-8a48-bbd87e7d0fa3 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] Completed event 5da2de25-8eef-4718-8a48-bbd87e7d0fa3 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 9 seconds
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] complete: finished ev 3991c821-eefe-45b8-bace-9b453f88132c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] Completed event 3991c821-eefe-45b8-bace-9b453f88132c (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] complete: finished ev 5d55ea64-dbbe-4698-8594-71d14b4b3871 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] Completed event 5d55ea64-dbbe-4698-8594-71d14b4b3871 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] complete: finished ev 20adff66-0ac3-4d2f-b91e-a826c3bd2bc9 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] Completed event 20adff66-0ac3-4d2f-b91e-a826c3bd2bc9 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] complete: finished ev a21b5fbe-fdd1-40d4-b7dc-dfd8eb9c7c63 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] Completed event a21b5fbe-fdd1-40d4-b7dc-dfd8eb9c7c63 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] complete: finished ev 6089ec68-5fb2-44c5-bd3c-d939db9532af (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] Completed event 6089ec68-5fb2-44c5-bd3c-d939db9532af (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] complete: finished ev e9c961b5-daae-474c-8208-e80feed178a2 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] Completed event e9c961b5-daae-474c-8208-e80feed178a2 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] complete: finished ev 8a0f33f6-ba7e-44fb-bbc2-1c361bd88a57 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] Completed event 8a0f33f6-ba7e-44fb-bbc2-1c361bd88a57 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] complete: finished ev cafd4b43-4172-4234-b71d-22b136eba0a0 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] Completed event cafd4b43-4172-4234-b71d-22b136eba0a0 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] complete: finished ev 5f994a6a-e617-4381-be2c-ec893ffa0bf1 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 01 16:36:20 compute-0 ceph-mgr[74571]: [progress INFO root] Completed event 5f994a6a-e617-4381-be2c-ec893ffa0bf1 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.15( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.14( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.16( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.17( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.17( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.16( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.11( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.10( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.13( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.12( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.13( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.d( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.c( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.f( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.8( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.9( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.b( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.3( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.2( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.1( v 33'4 (0'0,33'4] local-lis/les=32/33 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.1( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.e( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.a( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.8( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.3( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.7( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.6( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.7( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.5( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.5( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.4( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.1a( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.1b( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.19( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.18( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.19( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.1e( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.1f( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.1e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.1c( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.1d( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.14( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.16( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.17( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.10( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.12( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.13( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.8( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.a( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.0( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 40'384 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.3( v 33'4 (0'0,33'4] local-lis/les=49/50 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.1( v 33'4 (0'0,33'4] local-lis/les=49/50 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.0( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 33'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.2( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.a( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=49/50 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.7( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=49/50 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.4( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.1a( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.5( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.19( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.1e( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [1] r=0 lpr=49 pi=[32,49)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 50 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [1] r=0 lpr=49 pi=[34,49)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:20 compute-0 ceph-mon[74273]: pgmap v103: 181 pgs: 1 peering, 46 unknown, 134 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 01 16:36:20 compute-0 ceph-mon[74273]: osdmap e50: 3 total, 3 up, 3 in
Oct 01 16:36:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v106: 243 pgs: 1 peering, 108 unknown, 134 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 01 16:36:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 01 16:36:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:21 compute-0 ceph-mgr[74571]: [progress INFO root] Writing back 15 completed events
Oct 01 16:36:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 01 16:36:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct 01 16:36:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 16:36:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 16:36:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct 01 16:36:21 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct 01 16:36:21 compute-0 ceph-mon[74273]: 2.1 scrub starts
Oct 01 16:36:21 compute-0 ceph-mon[74273]: 2.1 scrub ok
Oct 01 16:36:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:22 compute-0 ceph-mon[74273]: pgmap v106: 243 pgs: 1 peering, 108 unknown, 134 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 16:36:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 16:36:22 compute-0 ceph-mon[74273]: osdmap e51: 3 total, 3 up, 3 in
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=51 pruub=15.414858818s) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active pruub 86.938354492s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=51 pruub=15.414858818s) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown pruub 86.938354492s@ mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v108: 305 pgs: 77 unknown, 228 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 51 pg[10.0( v 37'16 (0'0,37'16] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=51 pruub=13.062979698s) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 37'15 mlcod 37'15 active pruub 80.200828552s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 51 pg[10.0( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=51 pruub=13.062979698s) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 37'15 mlcod 0'0 unknown pruub 80.200828552s@ mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Oct 01 16:36:23 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Oct 01 16:36:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:36:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct 01 16:36:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct 01 16:36:23 compute-0 ceph-mon[74273]: pgmap v108: 305 pgs: 77 unknown, 228 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:23 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.16( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.13( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.12( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.1f( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.10( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.1e( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.1d( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.11( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.1c( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.1b( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.1a( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.18( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.c( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.7( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.6( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.5( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.19( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.4( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.3( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.8( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.f( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.a( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.9( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.b( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.c( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.d( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.e( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.1( v 37'16 (0'0,37'16] local-lis/les=36/37 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.2( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.14( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.13( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.a( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.5( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.7( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.1d( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.15( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.16( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.17( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.16( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.13( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.12( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.1f( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.1e( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.10( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.1c( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.1b( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.1d( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.18( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.7( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.6( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.5( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.1a( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.19( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.4( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.f( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.8( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.3( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.9( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.0( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 37'15 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.b( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.c( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.e( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.1( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.11( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.14( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.d( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.a( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.2( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.13( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.16( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.15( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 52 pg[10.17( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=36/36 les/c/f=37/37/0 sis=51) [2] r=0 lpr=51 pi=[36,51)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.c( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.5( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=51/52 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.7( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.1d( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:23 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:24 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.3 deep-scrub starts
Oct 01 16:36:24 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.3 deep-scrub ok
Oct 01 16:36:24 compute-0 ceph-mon[74273]: 2.2 scrub starts
Oct 01 16:36:24 compute-0 ceph-mon[74273]: 2.2 scrub ok
Oct 01 16:36:24 compute-0 ceph-mon[74273]: osdmap e52: 3 total, 3 up, 3 in
Oct 01 16:36:25 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Oct 01 16:36:25 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Oct 01 16:36:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v110: 305 pgs: 62 unknown, 243 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:25 compute-0 ceph-mon[74273]: 2.3 deep-scrub starts
Oct 01 16:36:25 compute-0 ceph-mon[74273]: 2.3 deep-scrub ok
Oct 01 16:36:25 compute-0 ceph-mon[74273]: 3.3 scrub starts
Oct 01 16:36:25 compute-0 ceph-mon[74273]: 3.3 scrub ok
Oct 01 16:36:25 compute-0 ceph-mon[74273]: pgmap v110: 305 pgs: 62 unknown, 243 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:26 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Oct 01 16:36:26 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Oct 01 16:36:27 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Oct 01 16:36:27 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Oct 01 16:36:27 compute-0 ceph-mon[74273]: 2.4 scrub starts
Oct 01 16:36:27 compute-0 ceph-mon[74273]: 2.4 scrub ok
Oct 01 16:36:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v111: 305 pgs: 62 unknown, 243 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:28 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Oct 01 16:36:28 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Oct 01 16:36:28 compute-0 ceph-mon[74273]: 3.4 scrub starts
Oct 01 16:36:28 compute-0 ceph-mon[74273]: 3.4 scrub ok
Oct 01 16:36:28 compute-0 ceph-mon[74273]: pgmap v111: 305 pgs: 62 unknown, 243 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:28 compute-0 ceph-mon[74273]: 4.1 scrub starts
Oct 01 16:36:28 compute-0 ceph-mon[74273]: 4.1 scrub ok
Oct 01 16:36:28 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Oct 01 16:36:28 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Oct 01 16:36:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:36:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v112: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 01 16:36:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 01 16:36:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 16:36:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct 01 16:36:29 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.12( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.584648132s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 83.689956665s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.12( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.584595680s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.689956665s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.524821281s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.630180359s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.19( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.420028687s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.525421143s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.19( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.419935226s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.525421143s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.524738312s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.630180359s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.514585495s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.620193481s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.10( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.584471703s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 83.690048218s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.10( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.584357262s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.690048218s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.18( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.419625282s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.525337219s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.514499664s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.620193481s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.17( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.419610023s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.525360107s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.18( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.419598579s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.525337219s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.1e( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.584214211s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 83.690040588s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.11( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.594698906s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 83.700508118s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.17( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.419576645s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.525360107s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.1e( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.584117889s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.690040588s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.11( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.594570160s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.700508118s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.16( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.419397354s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.525405884s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.16( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.419306755s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.525405884s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.514068604s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.620223999s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.514047623s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.620223999s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.513737679s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.620315552s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.13( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.418774605s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.525352478s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.513715744s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.620315552s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.13( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.418742180s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.525352478s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.523477554s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.630104065s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.523438454s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.630104065s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.1a( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.593257904s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 83.699989319s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.523337364s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.630104065s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.1a( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.593196869s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.699989319s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.513498306s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.620323181s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.15( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.418482780s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.525360107s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.513449669s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.620323181s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.11( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.418480873s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.525428772s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.19( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.593061447s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 83.700012207s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.11( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.418445587s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.525428772s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.522988319s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.630104065s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.522970200s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.630119324s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.522953987s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.630119324s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.15( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.418094635s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.525360107s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.7( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.592620850s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 83.699928284s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.f( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.418091774s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.525451660s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.7( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.592585564s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.699928284s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.f( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.418073654s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.525451660s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.522703171s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.630195618s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.522683144s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.630195618s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.19( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.593029022s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.700012207s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.d( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.417884827s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.525459290s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.d( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.417868614s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.525459290s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.6( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.592494965s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 83.699935913s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.b( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.417935371s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.525672913s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.b( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.417912483s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.525672913s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.6( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.592182159s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.699935913s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.522510529s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.630416870s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.522488594s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.630416870s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.522489548s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.630439758s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.522466660s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.630439758s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.8( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.592366219s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 83.700294495s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.7( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.417576790s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.525688171s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.8( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.425547600s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.533683777s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.8( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.425523758s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.533683777s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.7( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.417531967s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.525688171s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.4( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.591870308s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 83.700065613s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.f( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.591877937s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 83.700065613s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.4( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.591820717s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.700065613s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.522140503s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.630439758s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.f( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.591807365s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.700065613s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.522116661s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.630439758s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.9( v 52'17 (0'0,52'17] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.592040062s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 37'16 mlcod 37'16 active pruub 83.700386047s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.9( v 52'17 (0'0,52'17] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.591957092s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 37'16 mlcod 0'0 unknown NOTIFY pruub 83.700386047s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.2( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.425008774s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.533638000s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.2( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.424987793s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.533638000s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.b( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.591728210s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 83.700439453s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.3( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.424953461s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.533721924s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.521681786s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.630470276s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.3( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.424911499s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.533721924s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.521833420s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.630500793s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.521646500s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.630470276s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.b( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.591678619s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.700439453s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.8( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.592231750s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.700294495s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.521523476s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.630508423s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.521502495s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.630508423s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.521455765s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.630500793s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.4( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.424552917s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.533653259s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.4( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.424521446s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.533653259s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.521503448s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.630760193s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.521483421s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.630760193s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.e( v 52'17 (0'0,52'17] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.591158867s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 37'16 mlcod 37'16 active pruub 83.700462341s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.5( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.424459457s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.533737183s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.e( v 52'17 (0'0,52'17] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.591131210s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 37'16 mlcod 0'0 unknown NOTIFY pruub 83.700462341s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.5( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.424363136s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.533737183s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.d( v 52'17 (0'0,52'17] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.591118813s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 37'16 mlcod 37'16 active pruub 83.700515747s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-mon[74273]: 2.5 scrub starts
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.1( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.591026306s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 83.700500488s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-mon[74273]: 2.5 scrub ok
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.d( v 52'17 (0'0,52'17] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.590972900s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 37'16 mlcod 0'0 unknown NOTIFY pruub 83.700515747s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.9( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.423975945s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.533683777s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.1( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.590741158s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.700500488s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.520709991s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.630493164s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.520683289s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.630493164s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.9( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.423896790s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.533683777s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.2( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.590623856s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 83.700561523s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.2( v 37'16 (0'0,37'16] local-lis/les=51/52 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.590607643s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.700561523s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.a( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.423894882s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.533874512s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.a( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.423874855s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.533874512s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.13( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.590500832s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 83.700592041s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.13( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.590487480s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.700592041s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.1b( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.423576355s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.533714294s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.1b( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.423558235s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.533714294s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.14( v 52'17 (0'0,52'17] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.590326309s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 37'16 mlcod 37'16 active pruub 83.700531006s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.14( v 52'17 (0'0,52'17] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.590303421s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 37'16 mlcod 0'0 unknown NOTIFY pruub 83.700531006s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.15( v 52'17 (0'0,52'17] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.590295792s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 37'16 mlcod 37'16 active pruub 83.700607300s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.1c( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.423522949s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.533851624s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.15( v 52'17 (0'0,52'17] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.590267181s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 37'16 mlcod 0'0 unknown NOTIFY pruub 83.700607300s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.1c( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.423486710s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.533851624s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.520301819s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.630722046s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.1d( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.423339844s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.533866882s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.1d( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.423321724s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.533866882s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.16( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.589963913s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 83.700592041s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.520253181s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.630722046s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.16( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.589931488s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.700592041s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.520020485s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.630729675s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.17( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.589838028s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 83.700622559s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.519946098s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.630729675s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.1f( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.423087120s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.533889771s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.1f( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.423061371s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.533889771s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[10.17( v 37'16 (0'0,37'16] local-lis/les=51/52 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.589794159s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.700622559s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.519860268s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 84.630775452s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.519814491s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.630775452s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.6( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.420583725s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 83.533668518s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[2.6( empty local-lis/les=43/45 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=10.420533180s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.533668518s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[5.1e( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[2.19( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[2.18( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[5.11( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[10.9( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[5.7( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[2.17( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[10.8( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[10.15( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[2.1d( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[10.4( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[5.4( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[2.1c( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[5.13( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[5.12( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[10.7( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[2.15( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[2.f( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[2.2( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[5.5( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[2.1f( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[10.17( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[10.d( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[5.2( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[10.1a( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[5.16( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[10.19( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[5.9( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[10.6( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[2.d( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[5.f( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[5.3( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[2.3( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[10.b( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[2.5( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[10.2( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[2.a( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[5.c( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[2.9( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[2.4( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[2.7( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[10.f( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[10.e( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[5.1( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[2.b( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[2.6( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[2.8( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[10.1( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[10.11( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[2.16( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[10.1e( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[10.10( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[5.15( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[10.13( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[5.14( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[2.1b( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[2.13( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[2.11( empty local-lis/les=0/0 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[10.12( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[10.16( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.18( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.495881081s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 93.665458679s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.18( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.495829582s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.665458679s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.13( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.495417595s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 93.665199280s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.13( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.495391846s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.665199280s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.11( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.495204926s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 93.665092468s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.11( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.495119095s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.665092468s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.10( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.495086670s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 93.665100098s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.f( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.494876862s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 93.665077209s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.10( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.495039940s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.665100098s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.f( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.494811058s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.665077209s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53 pruub=14.477175713s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 96.647651672s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53 pruub=14.477118492s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.647651672s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.e( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.494462013s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 93.665023804s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.14( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.495474815s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 93.665191650s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.e( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.494359016s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.665023804s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.14( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.494491577s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.665191650s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.d( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.494221687s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 93.665054321s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53 pruub=14.476615906s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 96.647613525s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.d( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.494025230s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.665054321s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53 pruub=14.476552963s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.647613525s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.1( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.493710518s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 93.664978027s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=47/49 n=2 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53 pruub=14.475901604s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 96.647216797s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.1( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.493669510s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.664978027s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=47/49 n=2 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53 pruub=14.475815773s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.647216797s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.12( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.494589806s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 93.665176392s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.12( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.493677139s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.665176392s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.9( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.493265152s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 93.664916992s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.4( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.493243217s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 93.664901733s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.9( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.493245125s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.664916992s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.4( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.493202209s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.664901733s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53 pruub=14.475343704s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 96.647148132s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53 pruub=14.475319862s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.647148132s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=47/49 n=2 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53 pruub=14.475547791s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 96.647613525s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=47/49 n=2 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53 pruub=14.475505829s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.647613525s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.2( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.492819786s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 93.665008545s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.a( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.492487907s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 93.664871216s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.2( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.492757797s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.665008545s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.a( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.492443085s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.664871216s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53 pruub=14.467017174s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 96.639816284s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.1b( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.492125511s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 93.664756775s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53 pruub=14.466968536s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.639816284s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.1b( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.491868019s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.664756775s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.5( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.491825104s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 93.664810181s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.7( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.491532326s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 93.664558411s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.5( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.491792679s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.664810181s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.7( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.491506577s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.664558411s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.8( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.491467476s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 93.664611816s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=47/49 n=2 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53 pruub=14.466748238s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 96.639900208s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.8( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.491446495s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.664611816s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=47/49 n=2 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53 pruub=14.466714859s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.639900208s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.1c( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.491378784s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 93.664588928s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.1c( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.491351128s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.664588928s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.1a( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.491295815s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active pruub 93.664947510s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[4.1a( empty local-lis/les=45/46 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=11.491160393s) [2] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.664947510s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53 pruub=14.473394394s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 96.647285461s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53 pruub=14.472971916s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.647285461s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[5.1d( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[5.1a( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[10.14( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[5.18( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[5.19( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.549586296s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.387008667s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.549564362s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.387008667s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.1f( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.428140640s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.265686035s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.1f( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.428123474s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.265686035s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.453214645s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.290878296s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.453196526s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.290878296s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.466187477s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 93.303985596s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.466170311s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.303985596s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[4.18( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.465346336s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 93.303253174s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.465331078s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.303253174s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.1e( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.427536964s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.265571594s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.1e( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.427519798s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.265571594s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.452564240s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.290725708s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.452548981s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.290725708s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.465007782s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 93.303276062s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.464990616s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.303276062s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.548696518s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.387100220s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.548669815s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.387100220s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.1d( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.488517761s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.327064514s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.1d( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.488494873s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.327064514s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[4.11( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[4.13( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.451670647s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.290611267s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.451613426s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.290611267s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.548026085s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.387069702s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.547971725s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.387069702s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.1b( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.487143517s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.326370239s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.464004517s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 93.303268433s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.463977814s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.303268433s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[4.e( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.451397896s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.290702820s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.451328278s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.290702820s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.1b( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.487113953s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.326370239s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.464256287s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 93.303924561s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[4.1( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.463702202s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 93.303421021s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.464115143s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.303924561s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.463611603s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.303421021s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.559421539s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.399513245s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.463660240s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 93.303771973s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.559382439s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.399513245s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[4.a( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.463608742s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.303771973s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.463664055s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 93.303863525s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.463610649s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.303863525s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.463471413s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 93.303825378s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[4.1b( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[4.1c( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[4.1a( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[11.17( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[3.1e( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[7.1a( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[8.15( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[3.1f( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[7.1b( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[11.15( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[3.1d( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[11.11( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[8.14( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.463441849s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.303825378s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.558675766s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.399559021s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.449663162s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.290588379s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.558609009s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.399559021s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.449626923s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.290588379s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.558877945s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.400138855s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.558803558s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.400138855s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[8.11( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.449085236s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.290527344s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.449036598s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.290527344s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.7( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.484950066s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.326484680s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.7( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.484914780s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.326484680s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.462168694s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 93.303970337s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.462110519s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 93.303962708s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.462126732s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.303970337s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.462064743s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.303962708s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.557779312s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.399833679s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.557750702s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.399833679s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.18( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.484351158s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.326454163s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.448342323s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.290512085s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.448294640s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.290512085s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.6( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.484216690s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.326538086s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.6( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.484186172s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.326538086s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.461562157s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 93.304031372s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.557311058s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.399841309s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.557280540s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.399841309s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.461515427s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304031372s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.18( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.483938217s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.326454163s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.5( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.483538628s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.326522827s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.461112022s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 93.304107666s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.461060524s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304107666s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.5( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.483487129s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.326522827s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.556255341s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.399803162s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.460794449s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 93.304138184s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.556206703s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.399803162s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.460465431s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304138184s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.3( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.482338905s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.326507568s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.459937096s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 93.304176331s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.3( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.482288361s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.326507568s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.459889412s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304176331s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.555303574s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.399826050s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.555261612s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.399826050s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.445942879s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.290596008s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.445896149s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.290596008s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.554402351s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.399490356s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.554339409s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.399490356s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.458817482s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 93.304183960s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.444877625s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.290359497s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.444828987s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.290359497s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.458668709s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304183960s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[8.12( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[7.1c( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.8( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.480870247s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.326629639s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.8( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.480837822s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.326629639s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.553980827s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.399848938s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.553939819s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.399848938s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.444864273s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.290840149s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.444836617s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.290840149s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.a( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.480554581s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.326667786s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.a( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.480361938s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.326667786s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.553465843s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.399887085s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.553433418s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.399887085s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.457661629s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 93.304237366s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.457631111s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304237366s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.444386482s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.290504456s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.457424164s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 93.304222107s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.443702698s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.290504456s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.457395554s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304222107s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.443195343s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.290214539s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.443164825s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.290214539s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.457152367s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 93.304389954s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.457125664s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304389954s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.443999290s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.290390015s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.442792892s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.290245056s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.442934036s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.290390015s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.442755699s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.290245056s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.1( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.478866577s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.326522827s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.456676483s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 93.304405212s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.456609726s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304405212s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.1( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.478810310s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.326522827s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.9( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.478654861s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.326667786s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.551912308s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.399948120s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.9( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.478625298s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.326667786s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=49/50 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.456367493s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 93.304428101s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=49/50 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.456321716s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304428101s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.551858902s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.399948120s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.456243515s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 93.304435730s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.456216812s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304435730s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.551620483s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.400054932s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.551573753s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.400054932s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.c( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.478306770s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.326866150s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[3.7( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[7.2( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.c( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.478198051s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.326866150s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.551197052s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.399894714s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.551115990s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.399894714s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.441313744s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.290145874s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.441267967s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.290145874s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.551166534s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.400192261s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.551115990s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.400192261s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.455319405s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 93.304420471s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[11.d( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.454973221s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 93.304435730s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.455271721s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304420471s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.e( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.477389336s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.326889038s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.454933167s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304435730s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.e( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.477356911s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.326889038s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=49/50 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.454684258s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 93.304473877s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[8.d( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.f( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.477212906s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.327033997s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=49/50 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.454646111s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304473877s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.440344810s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.290237427s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.454283714s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 93.304466248s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.440047264s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.290420532s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.549358368s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.400192261s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[7.18( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[11.14( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[7.1f( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[3.1b( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[8.10( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[3.18( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.f( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.475337029s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.327033997s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.438455582s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.290237427s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.452673912s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304466248s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.438602448s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.290420532s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[3.5( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.452946663s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 93.304992676s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.548148155s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.400192261s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.452911377s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304992676s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.548068047s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.400184631s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.437981606s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.290115356s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[11.10( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.11( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.474703789s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.326873779s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.437950134s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.290115356s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.452264786s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 93.304588318s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.452188492s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 93.304550171s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.452238083s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304588318s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[11.b( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[11.f( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[11.9( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.452152252s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304550171s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.547711372s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.400222778s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.547685623s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.400222778s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.12( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.474430084s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.327018738s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[7.5( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[7.3( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[11.12( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[7.c( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.12( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.474401474s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.327018738s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[3.8( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.11( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.474334717s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.326873779s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.547466278s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.400238037s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.547439575s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.400238037s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.547222137s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.400184631s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[11.2( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.449914932s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 93.304954529s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.449507713s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 93.304603577s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.449832916s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304954529s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.449463844s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304603577s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.545113564s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.400283813s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.545066833s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.400283813s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.449435234s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 93.304809570s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.449448586s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 93.304817200s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.434594154s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.290000916s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.449403763s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304809570s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.449415207s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304817200s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.434567451s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.290000916s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.15( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.471428871s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.327102661s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.544565201s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.400253296s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[7.e( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.15( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.471392632s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.327102661s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.544533730s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.400253296s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.448950768s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 93.304878235s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.544292450s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 88.400276184s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.448920250s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304878235s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=10.544263840s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.400276184s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.448785782s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 93.305015564s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.448745728s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.305015564s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.423942566s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 91.280303955s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.423911095s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.280303955s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.17( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.470656395s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.327148438s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.17( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.470609665s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.327148438s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.448276520s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 93.304931641s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[8.c( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=15.448241234s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 93.304931641s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.16( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.470381737s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active pruub 89.327148438s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[3.16( empty local-lis/les=43/46 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53 pruub=11.470326424s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.327148438s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[11.3( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[11.e( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[3.6( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[7.1( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[4.10( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[4.f( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[8.2( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[8.e( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[4.14( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[4.d( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[3.3( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[11.8( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[6.1( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[4.12( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[3.e( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[4.9( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[3.a( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[8.4( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[4.4( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[7.a( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[7.8( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[8.f( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[11.18( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[7.15( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[6.3( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[8.b( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[7.f( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[11.1a( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[3.11( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[7.6( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[4.2( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[6.9( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[11.1b( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[8.9( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[8.1b( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[3.1( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[4.5( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[3.9( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[11.1c( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[4.7( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[4.8( empty local-lis/les=0/0 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[11.1( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[7.11( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[6.5( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[11.1e( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 53 pg[6.7( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[11.4( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[11.1f( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[8.1c( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 53 pg[3.16( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[3.c( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[7.9( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[11.6( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[8.6( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[3.f( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[8.1a( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[3.12( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[11.19( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[8.1f( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[3.15( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[8.18( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[8.1d( empty local-lis/les=0/0 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[7.13( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:29 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 53 pg[3.17( empty local-lis/les=0/0 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.b deep-scrub starts
Oct 01 16:36:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.b deep-scrub ok
Oct 01 16:36:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct 01 16:36:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct 01 16:36:30 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:30 compute-0 ceph-mon[74273]: pgmap v112: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 16:36:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 16:36:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 16:36:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 01 16:36:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 16:36:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 01 16:36:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 16:36:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 16:36:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 16:36:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 16:36:30 compute-0 ceph-mon[74273]: osdmap e53: 3 total, 3 up, 3 in
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[5.13( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[5.15( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[10.16( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[5.14( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[2.13( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[10.1( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[2.16( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[2.b( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[2.8( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[10.e( v 52'17 lc 37'7 (0'0,52'17] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=52'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[5.3( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[2.1f( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[10.17( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[5.5( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[10.d( v 52'17 lc 37'9 (0'0,52'17] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=52'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[2.2( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[5.2( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[10.7( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[2.f( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[10.4( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[2.11( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[5.4( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[2.1d( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[10.15( v 52'17 lc 37'5 (0'0,52'17] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=52'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[5.7( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[5.1e( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[10.9( v 52'17 lc 37'15 (0'0,52'17] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=52'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[2.1c( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[2.19( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[2.18( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[11.10( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[7.1f( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[3.f( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[10.8( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[3.1( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[7.18( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[3.c( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[3.1b( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=53/54 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=33'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=53/54 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[2.17( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[5.12( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[5.11( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[2.15( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[10.1a( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[10.6( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[10.19( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[5.16( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[2.d( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[2.3( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[5.f( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[10.b( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[10.2( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[5.9( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[2.5( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[2.a( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[5.c( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[2.9( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[2.4( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[2.7( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[2.6( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[5.1( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[10.f( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[10.11( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[2.1b( empty local-lis/les=53/54 n=0 ec=43/13 lis/c=43/43 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[10.10( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[5.1d( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[5.1a( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[10.12( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[10.13( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[5.18( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[10.14( v 52'17 lc 37'13 (0'0,52'17] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[51,53)/1 crt=52'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[4.4( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[5.19( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[11.14( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[7.9( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[7.6( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[3.3( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[11.6( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[11.e( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[3.6( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[7.3( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[7.f( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[11.4( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[3.a( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[3.9( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[3.17( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[7.4( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[10.1e( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[3.15( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[11.1( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[3.12( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[11.19( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[11.17( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[6.d( v 37'39 lc 33'12 (0'0,37'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[4.f( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[4.d( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[3.1f( empty local-lis/les=53/54 n=0 ec=43/15 lis/c=43/43 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[7.13( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 54 pg[11.f( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[6.5( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[4.7( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[6.7( v 37'39 lc 33'19 (0'0,37'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[4.5( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[4.2( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[4.8( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[4.9( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[4.14( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[4.10( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[4.12( empty local-lis/les=53/54 n=0 ec=45/17 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:30 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 54 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[47,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:31 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct 01 16:36:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v115: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct 01 16:36:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 01 16:36:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct 01 16:36:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 01 16:36:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct 01 16:36:31 compute-0 ceph-mon[74273]: 3.b deep-scrub starts
Oct 01 16:36:31 compute-0 ceph-mon[74273]: 3.b deep-scrub ok
Oct 01 16:36:31 compute-0 ceph-mon[74273]: osdmap e54: 3 total, 3 up, 3 in
Oct 01 16:36:31 compute-0 ceph-mon[74273]: 4.3 scrub starts
Oct 01 16:36:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 01 16:36:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 01 16:36:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 01 16:36:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 01 16:36:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct 01 16:36:31 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct 01 16:36:31 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct 01 16:36:31 compute-0 ceph-mgr[74571]: [progress INFO root] Completed event e4f4ba38-469c-487d-aaa2-771ac4af1678 (Global Recovery Event) in 15 seconds
Oct 01 16:36:31 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 55 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=55 pruub=12.080504417s) [1] r=-1 lpr=55 pi=[47,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 96.639862061s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:31 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 55 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=55 pruub=12.080397606s) [1] r=-1 lpr=55 pi=[47,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.639862061s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:31 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 55 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=47/49 n=2 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=55 pruub=12.087366104s) [1] r=-1 lpr=55 pi=[47,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 96.647186279s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:31 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 55 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=47/49 n=2 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=55 pruub=12.087265968s) [1] r=-1 lpr=55 pi=[47,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.647186279s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:31 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 55 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=47/49 n=2 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=55 pruub=12.087166786s) [1] r=-1 lpr=55 pi=[47,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 96.647224426s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:31 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 55 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=55 pruub=12.087135315s) [1] r=-1 lpr=55 pi=[47,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 96.647293091s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:31 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 55 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=47/49 n=2 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=55 pruub=12.087095261s) [1] r=-1 lpr=55 pi=[47,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.647224426s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:31 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 55 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=55 pruub=12.087082863s) [1] r=-1 lpr=55 pi=[47,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.647293091s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:31 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:31 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:31 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:31 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:31 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 55 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:31 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 55 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:31 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 55 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:31 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 55 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:31 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 55 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:31 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 55 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:31 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 55 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:31 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 55 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:31 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 55 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:31 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 55 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:31 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 55 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:31 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 55 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:31 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 55 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:31 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 55 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:31 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 55 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:31 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 55 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct 01 16:36:32 compute-0 ceph-mon[74273]: pgmap v115: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 01 16:36:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 01 16:36:32 compute-0 ceph-mon[74273]: osdmap e55: 3 total, 3 up, 3 in
Oct 01 16:36:32 compute-0 ceph-mon[74273]: 4.3 scrub ok
Oct 01 16:36:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct 01 16:36:32 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct 01 16:36:32 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 56 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.352908134s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 96.235000610s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:32 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 56 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.352828979s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.235000610s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:32 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 56 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.346683502s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 96.229003906s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:32 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 56 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.352619171s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 96.235084534s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:32 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 56 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.346591949s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.229003906s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:32 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 56 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.352551460s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.235084534s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:32 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 56 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.352476120s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 96.235115051s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:32 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 56 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.352256775s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.235115051s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:32 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 56 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.346003532s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 96.229034424s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:32 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 56 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.346146584s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 96.229179382s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:32 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 56 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.345975876s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.229034424s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:32 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 56 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.346024513s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.229179382s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:32 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 56 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=55/56 n=2 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:32 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 56 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=55/56 n=2 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:32 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 56 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:32 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 56 pg[6.e( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:32 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 56 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:32 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 56 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:32 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 56 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:32 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 56 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:32 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 56 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:32 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 56 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:32 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 56 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:32 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 56 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:32 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 56 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:32 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 56 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:32 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 56 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:32 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 56 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:32 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.c deep-scrub starts
Oct 01 16:36:32 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.c deep-scrub ok
Oct 01 16:36:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v118: 305 pgs: 2 active+recovery_wait+remapped, 7 active+remapped, 1 active+recovering+remapped, 295 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 10/213 objects misplaced (4.695%); 786 B/s, 2 keys/s, 14 objects/s recovering
Oct 01 16:36:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct 01 16:36:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 01 16:36:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct 01 16:36:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 01 16:36:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct 01 16:36:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 01 16:36:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 01 16:36:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct 01 16:36:33 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct 01 16:36:33 compute-0 ceph-mon[74273]: osdmap e56: 3 total, 3 up, 3 in
Oct 01 16:36:33 compute-0 ceph-mon[74273]: 2.c deep-scrub starts
Oct 01 16:36:33 compute-0 ceph-mon[74273]: 2.c deep-scrub ok
Oct 01 16:36:33 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 01 16:36:33 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.348153114s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 96.236389160s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.348084450s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.236389160s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.347249031s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 96.235694885s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.347189903s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.235694885s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.346577644s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 96.235176086s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=12.963681221s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=37'39 mlcod 37'39 active pruub 94.852302551s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=12.963650703s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 94.852302551s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.346576691s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 96.235244751s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.346522331s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.235176086s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.346538544s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.235244751s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/55/0 sis=57 pruub=12.966565132s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=37'39 mlcod 37'39 active pruub 94.855392456s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/55/0 sis=57 pruub=12.966540337s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 94.855392456s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=12.966525078s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=37'39 mlcod 37'39 active pruub 94.855499268s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=12.966493607s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 94.855499268s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.346278191s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 96.235313416s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=12.966506958s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=37'39 mlcod 37'39 active pruub 94.855560303s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.346207619s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.235313416s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=12.966455460s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 94.855560303s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.346570015s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 96.235771179s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.346519470s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.235771179s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.346124649s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 96.235443115s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.346294403s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 96.235092163s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.346072197s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.235443115s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.345458984s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 96.234886169s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.345661163s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.235092163s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.345427513s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.234886169s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.345557213s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 96.235275269s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[6.3( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 57 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.345467567s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 96.235275269s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[6.7( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=56/57 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=56/57 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=56/57 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 57 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:33 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.e scrub starts
Oct 01 16:36:33 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.e scrub ok
Oct 01 16:36:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:36:34 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct 01 16:36:34 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct 01 16:36:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct 01 16:36:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct 01 16:36:34 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct 01 16:36:34 compute-0 ceph-mon[74273]: pgmap v118: 305 pgs: 2 active+recovery_wait+remapped, 7 active+remapped, 1 active+recovering+remapped, 295 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 10/213 objects misplaced (4.695%); 786 B/s, 2 keys/s, 14 objects/s recovering
Oct 01 16:36:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 01 16:36:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 01 16:36:34 compute-0 ceph-mon[74273]: osdmap e57: 3 total, 3 up, 3 in
Oct 01 16:36:34 compute-0 ceph-mon[74273]: 2.e scrub starts
Oct 01 16:36:34 compute-0 ceph-mon[74273]: 2.e scrub ok
Oct 01 16:36:34 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 58 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:34 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 58 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:34 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 58 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:34 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 58 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:34 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 58 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:34 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 58 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:34 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 58 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:34 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 58 pg[6.7( v 37'39 lc 33'19 (0'0,37'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:34 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 58 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:34 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 58 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/58 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:34 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 58 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:34 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 58 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:34 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 58 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=53/53 les/c/f=54/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:34 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 58 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=49/34 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v121: 305 pgs: 14 peering, 291 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s, 2 keys/s, 27 objects/s recovering
Oct 01 16:36:35 compute-0 ceph-mon[74273]: 3.d scrub starts
Oct 01 16:36:35 compute-0 ceph-mon[74273]: 3.d scrub ok
Oct 01 16:36:35 compute-0 ceph-mon[74273]: osdmap e58: 3 total, 3 up, 3 in
Oct 01 16:36:36 compute-0 ceph-mgr[74571]: [progress INFO root] Writing back 16 completed events
Oct 01 16:36:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 01 16:36:36 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:36 compute-0 ceph-mon[74273]: pgmap v121: 305 pgs: 14 peering, 291 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s, 2 keys/s, 27 objects/s recovering
Oct 01 16:36:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:36:36 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Oct 01 16:36:36 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Oct 01 16:36:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v122: 305 pgs: 14 peering, 291 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 747 B/s, 1 keys/s, 18 objects/s recovering
Oct 01 16:36:37 compute-0 ceph-mon[74273]: 2.10 scrub starts
Oct 01 16:36:37 compute-0 ceph-mon[74273]: 2.10 scrub ok
Oct 01 16:36:37 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Oct 01 16:36:37 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Oct 01 16:36:38 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct 01 16:36:38 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct 01 16:36:38 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Oct 01 16:36:38 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Oct 01 16:36:38 compute-0 ceph-mon[74273]: pgmap v122: 305 pgs: 14 peering, 291 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 747 B/s, 1 keys/s, 18 objects/s recovering
Oct 01 16:36:38 compute-0 ceph-mon[74273]: 2.12 scrub starts
Oct 01 16:36:38 compute-0 ceph-mon[74273]: 2.12 scrub ok
Oct 01 16:36:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:36:38 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Oct 01 16:36:38 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Oct 01 16:36:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v123: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 733 B/s, 2 keys/s, 16 objects/s recovering
Oct 01 16:36:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct 01 16:36:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 01 16:36:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct 01 16:36:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 01 16:36:39 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Oct 01 16:36:39 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Oct 01 16:36:39 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.1a deep-scrub starts
Oct 01 16:36:39 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.1a deep-scrub ok
Oct 01 16:36:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct 01 16:36:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 01 16:36:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 01 16:36:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct 01 16:36:39 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct 01 16:36:39 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 59 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=47/49 n=2 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=59 pruub=12.349481583s) [1] r=-1 lpr=59 pi=[47,59)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 104.640113831s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:39 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 59 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=47/49 n=2 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=59 pruub=12.349414825s) [1] r=-1 lpr=59 pi=[47,59)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.640113831s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:39 compute-0 ceph-mon[74273]: 3.10 scrub starts
Oct 01 16:36:39 compute-0 ceph-mon[74273]: 3.10 scrub ok
Oct 01 16:36:39 compute-0 ceph-mon[74273]: 2.14 scrub starts
Oct 01 16:36:39 compute-0 ceph-mon[74273]: 2.14 scrub ok
Oct 01 16:36:39 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 59 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=59 pruub=12.356596947s) [1] r=-1 lpr=59 pi=[47,59)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 104.647758484s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:39 compute-0 ceph-mon[74273]: 4.6 scrub starts
Oct 01 16:36:39 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 59 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=59 pruub=12.356567383s) [1] r=-1 lpr=59 pi=[47,59)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.647758484s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:39 compute-0 ceph-mon[74273]: 4.6 scrub ok
Oct 01 16:36:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 01 16:36:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 01 16:36:39 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 59 pg[6.c( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[47,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:39 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 59 pg[6.4( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[47,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:39 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.b scrub starts
Oct 01 16:36:40 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.b scrub ok
Oct 01 16:36:40 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Oct 01 16:36:40 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Oct 01 16:36:40 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.1e deep-scrub starts
Oct 01 16:36:40 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 2.1e deep-scrub ok
Oct 01 16:36:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct 01 16:36:40 compute-0 ceph-mon[74273]: pgmap v123: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 733 B/s, 2 keys/s, 16 objects/s recovering
Oct 01 16:36:40 compute-0 ceph-mon[74273]: 3.13 scrub starts
Oct 01 16:36:40 compute-0 ceph-mon[74273]: 3.13 scrub ok
Oct 01 16:36:40 compute-0 ceph-mon[74273]: 2.1a deep-scrub starts
Oct 01 16:36:40 compute-0 ceph-mon[74273]: 2.1a deep-scrub ok
Oct 01 16:36:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 01 16:36:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 01 16:36:40 compute-0 ceph-mon[74273]: osdmap e59: 3 total, 3 up, 3 in
Oct 01 16:36:40 compute-0 ceph-mon[74273]: 4.b scrub starts
Oct 01 16:36:40 compute-0 ceph-mon[74273]: 4.b scrub ok
Oct 01 16:36:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct 01 16:36:40 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct 01 16:36:40 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 60 pg[6.4( v 37'39 lc 33'14 (0'0,37'39] local-lis/les=59/60 n=2 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[47,59)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:40 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 60 pg[6.c( v 37'39 lc 33'16 (0'0,37'39] local-lis/les=59/60 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[47,59)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:40 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.c scrub starts
Oct 01 16:36:40 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.c scrub ok
Oct 01 16:36:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v126: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 92 B/s, 1 keys/s, 1 objects/s recovering
Oct 01 16:36:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct 01 16:36:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 01 16:36:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct 01 16:36:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 01 16:36:41 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Oct 01 16:36:41 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Oct 01 16:36:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:36:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:36:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:36:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:36:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:36:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:36:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct 01 16:36:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 01 16:36:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 01 16:36:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct 01 16:36:41 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct 01 16:36:41 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 61 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=12.873309135s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=37'39 mlcod 37'39 active pruub 102.855201721s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:41 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 61 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=12.873242378s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 102.855201721s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:41 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 61 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=12.873353004s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=37'39 mlcod 37'39 active pruub 102.855506897s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:41 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 61 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=12.873298645s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 102.855506897s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:41 compute-0 ceph-mon[74273]: 3.14 scrub starts
Oct 01 16:36:41 compute-0 ceph-mon[74273]: 3.14 scrub ok
Oct 01 16:36:41 compute-0 ceph-mon[74273]: 2.1e deep-scrub starts
Oct 01 16:36:41 compute-0 ceph-mon[74273]: 2.1e deep-scrub ok
Oct 01 16:36:41 compute-0 ceph-mon[74273]: osdmap e60: 3 total, 3 up, 3 in
Oct 01 16:36:41 compute-0 ceph-mon[74273]: 4.c scrub starts
Oct 01 16:36:41 compute-0 ceph-mon[74273]: 4.c scrub ok
Oct 01 16:36:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 01 16:36:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 01 16:36:41 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 61 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:41 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 61 pg[6.5( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:42 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Oct 01 16:36:42 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Oct 01 16:36:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct 01 16:36:42 compute-0 ceph-mon[74273]: pgmap v126: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 92 B/s, 1 keys/s, 1 objects/s recovering
Oct 01 16:36:42 compute-0 ceph-mon[74273]: 3.19 scrub starts
Oct 01 16:36:42 compute-0 ceph-mon[74273]: 3.19 scrub ok
Oct 01 16:36:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 01 16:36:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 01 16:36:42 compute-0 ceph-mon[74273]: osdmap e61: 3 total, 3 up, 3 in
Oct 01 16:36:42 compute-0 ceph-mon[74273]: 4.15 scrub starts
Oct 01 16:36:42 compute-0 ceph-mon[74273]: 4.15 scrub ok
Oct 01 16:36:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct 01 16:36:42 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct 01 16:36:42 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 62 pg[6.5( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=61/62 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:42 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 62 pg[6.d( v 37'39 lc 33'12 (0'0,37'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v129: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 390 B/s, 1 objects/s recovering
Oct 01 16:36:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct 01 16:36:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 01 16:36:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct 01 16:36:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 01 16:36:43 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Oct 01 16:36:43 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Oct 01 16:36:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct 01 16:36:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 01 16:36:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 01 16:36:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct 01 16:36:43 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct 01 16:36:43 compute-0 ceph-mon[74273]: osdmap e62: 3 total, 3 up, 3 in
Oct 01 16:36:43 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 01 16:36:43 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 01 16:36:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:36:43 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 63 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=8.876910210s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 101.303810120s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:43 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 63 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=8.876860619s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.303810120s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:43 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 63 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=8.877018929s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 101.304611206s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:43 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 63 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=8.876976967s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.304611206s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:43 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 63 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=8.877055168s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 101.304733276s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:43 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 63 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=8.877034187s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.304733276s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:43 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63) [2] r=0 lpr=63 pi=[49,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:43 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 63 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=8.876873970s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 101.305068970s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:43 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 63 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=8.876681328s) [2] r=-1 lpr=63 pi=[49,63)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.305068970s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:43 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63) [2] r=0 lpr=63 pi=[49,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:43 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63) [2] r=0 lpr=63 pi=[49,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:43 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63) [2] r=0 lpr=63 pi=[49,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:44 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Oct 01 16:36:44 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Oct 01 16:36:44 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct 01 16:36:44 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct 01 16:36:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct 01 16:36:44 compute-0 ceph-mon[74273]: pgmap v129: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 390 B/s, 1 objects/s recovering
Oct 01 16:36:44 compute-0 ceph-mon[74273]: 3.1a scrub starts
Oct 01 16:36:44 compute-0 ceph-mon[74273]: 3.1a scrub ok
Oct 01 16:36:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 01 16:36:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 01 16:36:44 compute-0 ceph-mon[74273]: osdmap e63: 3 total, 3 up, 3 in
Oct 01 16:36:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct 01 16:36:44 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct 01 16:36:44 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:44 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:44 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:44 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:44 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:44 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:44 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:44 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:44 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 64 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2]/[1] r=0 lpr=64 pi=[49,64)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:44 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 64 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2]/[1] r=0 lpr=64 pi=[49,64)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:44 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 64 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2]/[1] r=0 lpr=64 pi=[49,64)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:44 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 64 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2]/[1] r=0 lpr=64 pi=[49,64)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:44 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 64 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2]/[1] r=0 lpr=64 pi=[49,64)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:44 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 64 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2]/[1] r=0 lpr=64 pi=[49,64)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:44 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 64 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2]/[1] r=0 lpr=64 pi=[49,64)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:44 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 64 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2]/[1] r=0 lpr=64 pi=[49,64)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v132: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 430 B/s, 2 objects/s recovering
Oct 01 16:36:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct 01 16:36:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 01 16:36:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct 01 16:36:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 01 16:36:45 compute-0 ceph-mon[74273]: 3.1c scrub starts
Oct 01 16:36:45 compute-0 ceph-mon[74273]: 3.1c scrub ok
Oct 01 16:36:45 compute-0 ceph-mon[74273]: 5.6 scrub starts
Oct 01 16:36:45 compute-0 ceph-mon[74273]: 5.6 scrub ok
Oct 01 16:36:45 compute-0 ceph-mon[74273]: osdmap e64: 3 total, 3 up, 3 in
Oct 01 16:36:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 01 16:36:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 01 16:36:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct 01 16:36:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 01 16:36:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 01 16:36:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct 01 16:36:45 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct 01 16:36:45 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 65 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65 pruub=12.860866547s) [2] r=-1 lpr=65 pi=[57,65)/1 crt=40'385 mlcod 0'0 active pruub 111.249176025s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:45 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 65 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65 pruub=12.860774040s) [2] r=-1 lpr=65 pi=[57,65)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 111.249176025s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:45 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 65 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65 pruub=12.860739708s) [2] r=-1 lpr=65 pi=[57,65)/1 crt=40'385 mlcod 0'0 active pruub 111.249168396s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:45 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 65 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65 pruub=12.860696793s) [2] r=-1 lpr=65 pi=[57,65)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 111.249168396s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:45 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 65 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65 pruub=12.860601425s) [2] r=-1 lpr=65 pi=[57,65)/1 crt=40'385 mlcod 0'0 active pruub 111.249320984s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:45 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 65 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65 pruub=12.860552788s) [2] r=-1 lpr=65 pi=[57,65)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 111.249320984s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:45 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 65 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.849000931s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=40'385 mlcod 0'0 active pruub 110.237968445s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:45 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 65 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.848944664s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 110.237968445s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:45 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:45 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2] r=0 lpr=65 pi=[57,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:45 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2] r=0 lpr=65 pi=[57,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:45 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=65) [2] r=0 lpr=65 pi=[57,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:45 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 65 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=64/65 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[49,64)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:45 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 65 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=64/65 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[49,64)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:45 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 65 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=64/65 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[49,64)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:45 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 65 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=64/65 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[49,64)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:46 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Oct 01 16:36:46 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Oct 01 16:36:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct 01 16:36:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct 01 16:36:46 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct 01 16:36:46 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 66 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:46 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 66 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:46 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:46 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:46 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:46 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 66 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:46 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:46 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 66 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:46 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:46 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:46 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 66 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:46 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 66 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:46 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:46 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:46 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 66 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:46 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 66 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:46 compute-0 ceph-mon[74273]: pgmap v132: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 430 B/s, 2 objects/s recovering
Oct 01 16:36:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 01 16:36:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 01 16:36:46 compute-0 ceph-mon[74273]: osdmap e65: 3 total, 3 up, 3 in
Oct 01 16:36:46 compute-0 ceph-mon[74273]: 7.7 scrub starts
Oct 01 16:36:46 compute-0 ceph-mon[74273]: 7.7 scrub ok
Oct 01 16:36:46 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 66 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=64/65 n=6 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66 pruub=15.006968498s) [2] async=[2] r=-1 lpr=66 pi=[49,66)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 110.073486328s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:46 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 66 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=64/65 n=5 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66 pruub=15.007040977s) [2] async=[2] r=-1 lpr=66 pi=[49,66)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 110.073577881s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:46 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 66 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=64/65 n=5 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66 pruub=15.006951332s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.073577881s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:46 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 66 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=64/65 n=6 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66 pruub=15.006846428s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.073486328s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:46 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 66 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=0 lpr=66 pi=[57,66)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:46 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 66 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:46 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 66 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=0 lpr=66 pi=[57,66)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:46 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 66 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:46 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 66 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=0 lpr=66 pi=[57,66)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:46 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 66 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=0 lpr=66 pi=[57,66)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:46 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 66 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=0 lpr=66 pi=[57,66)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:46 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 66 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] r=0 lpr=66 pi=[57,66)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:46 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 66 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=64/65 n=6 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66 pruub=15.006778717s) [2] async=[2] r=-1 lpr=66 pi=[49,66)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 110.073478699s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:46 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 66 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=64/65 n=5 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66 pruub=15.006640434s) [2] async=[2] r=-1 lpr=66 pi=[49,66)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 110.073425293s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:46 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 66 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=64/65 n=5 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66 pruub=15.006583214s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.073425293s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:46 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 66 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=64/65 n=6 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66 pruub=15.006642342s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.073478699s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:47 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Oct 01 16:36:47 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Oct 01 16:36:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v135: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 39 B/s, 0 objects/s recovering
Oct 01 16:36:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct 01 16:36:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 01 16:36:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct 01 16:36:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 01 16:36:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct 01 16:36:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 01 16:36:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 01 16:36:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct 01 16:36:47 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct 01 16:36:47 compute-0 ceph-mon[74273]: osdmap e66: 3 total, 3 up, 3 in
Oct 01 16:36:47 compute-0 ceph-mon[74273]: 4.16 scrub starts
Oct 01 16:36:47 compute-0 ceph-mon[74273]: 4.16 scrub ok
Oct 01 16:36:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 01 16:36:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 01 16:36:47 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 67 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=67 pruub=12.199593544s) [2] r=-1 lpr=67 pi=[47,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 112.647514343s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:47 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 67 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=67 pruub=12.199557304s) [2] r=-1 lpr=67 pi=[47,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.647514343s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:47 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:47 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 67 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:47 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 67 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=66/67 n=6 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:47 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 67 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:47 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 67 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=66/67 n=6 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:47 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 67 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=66/67 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[57,66)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:47 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 67 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=66/67 n=6 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[57,66)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:47 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 67 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[56,66)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:47 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 67 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[57,66)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:47 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 67 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67 pruub=13.011361122s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 109.305221558s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:47 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 67 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67 pruub=13.011280060s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 109.305221558s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:47 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 67 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67 pruub=13.010951042s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 109.305221558s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:47 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 67 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67 pruub=13.010848045s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 109.305221558s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:47 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:47 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:48 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct 01 16:36:48 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct 01 16:36:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct 01 16:36:48 compute-0 ceph-mon[74273]: pgmap v135: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 39 B/s, 0 objects/s recovering
Oct 01 16:36:48 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 01 16:36:48 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 01 16:36:48 compute-0 ceph-mon[74273]: osdmap e67: 3 total, 3 up, 3 in
Oct 01 16:36:48 compute-0 ceph-mon[74273]: 7.b scrub starts
Oct 01 16:36:48 compute-0 ceph-mon[74273]: 7.b scrub ok
Oct 01 16:36:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct 01 16:36:48 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct 01 16:36:48 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 68 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=68) [2]/[1] r=0 lpr=68 pi=[49,68)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:48 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[49,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:48 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[49,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:48 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 68 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:48 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 68 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:48 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 68 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:48 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 68 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=68) [2]/[1] r=0 lpr=68 pi=[49,68)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:48 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 68 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:48 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 68 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:48 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 68 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:48 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 68 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=68) [2]/[1] r=0 lpr=68 pi=[49,68)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:48 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 68 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=68) [2]/[1] r=0 lpr=68 pi=[49,68)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:48 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 68 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:48 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[49,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:48 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 68 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:48 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[49,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:48 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 68 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=66/67 n=6 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68 pruub=14.847204208s) [2] async=[2] r=-1 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 40'385 active pruub 116.466850281s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:48 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 68 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68 pruub=14.847185135s) [2] async=[2] r=-1 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 40'385 active pruub 116.466873169s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:48 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 68 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=49/34 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=14.847208023s) [2] async=[2] r=-1 lpr=68 pi=[56,68)/1 crt=40'385 mlcod 40'385 active pruub 116.466911316s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:48 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 68 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=49/34 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=14.847144127s) [2] r=-1 lpr=68 pi=[56,68)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 116.466911316s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:48 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 68 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=66/67 n=6 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68 pruub=14.847131729s) [2] r=-1 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 116.466850281s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:48 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 68 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68 pruub=14.847083092s) [2] r=-1 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 116.466873169s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:48 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 68 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=66/67 n=6 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68 pruub=14.846551895s) [2] async=[2] r=-1 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 40'385 active pruub 116.466842651s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:48 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 68 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=66/67 n=6 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68 pruub=14.846480370s) [2] r=-1 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 116.466842651s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:48 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v138: 305 pgs: 2 unknown, 1 peering, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 4 objects/s recovering
Oct 01 16:36:49 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Oct 01 16:36:49 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Oct 01 16:36:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct 01 16:36:49 compute-0 ceph-mon[74273]: osdmap e68: 3 total, 3 up, 3 in
Oct 01 16:36:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct 01 16:36:49 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct 01 16:36:49 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 69 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[49,68)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:49 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 69 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=68/69 n=6 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:49 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 69 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:49 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 69 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=68/69 n=6 ec=49/34 lis/c=66/57 les/c/f=67/58/0 sis=68) [2] r=0 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:49 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 69 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=49/34 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:49 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 69 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=68/69 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[49,68)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct 01 16:36:50 compute-0 ceph-mon[74273]: pgmap v138: 305 pgs: 2 unknown, 1 peering, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 4 objects/s recovering
Oct 01 16:36:50 compute-0 ceph-mon[74273]: 5.8 scrub starts
Oct 01 16:36:50 compute-0 ceph-mon[74273]: 5.8 scrub ok
Oct 01 16:36:50 compute-0 ceph-mon[74273]: osdmap e69: 3 total, 3 up, 3 in
Oct 01 16:36:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct 01 16:36:50 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct 01 16:36:50 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 70 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=68/69 n=6 ec=49/34 lis/c=68/49 les/c/f=69/50/0 sis=70 pruub=14.963750839s) [2] async=[2] r=-1 lpr=70 pi=[49,70)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 114.265708923s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:50 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 70 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=68/69 n=6 ec=49/34 lis/c=68/49 les/c/f=69/50/0 sis=70 pruub=14.963620186s) [2] r=-1 lpr=70 pi=[49,70)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 114.265708923s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:50 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 70 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=49/34 lis/c=68/49 les/c/f=69/50/0 sis=70 pruub=14.959170341s) [2] async=[2] r=-1 lpr=70 pi=[49,70)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 114.261482239s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:50 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 70 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=49/34 lis/c=68/49 les/c/f=69/50/0 sis=70 pruub=14.959095001s) [2] r=-1 lpr=70 pi=[49,70)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 114.261482239s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:50 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 70 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=68/49 les/c/f=69/50/0 sis=70) [2] r=0 lpr=70 pi=[49,70)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:50 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 70 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=68/49 les/c/f=69/50/0 sis=70) [2] r=0 lpr=70 pi=[49,70)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:50 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 70 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=68/49 les/c/f=69/50/0 sis=70) [2] r=0 lpr=70 pi=[49,70)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:50 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 70 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=68/49 les/c/f=69/50/0 sis=70) [2] r=0 lpr=70 pi=[49,70)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v141: 305 pgs: 2 unknown, 1 peering, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 4 objects/s recovering
Oct 01 16:36:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct 01 16:36:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct 01 16:36:51 compute-0 ceph-mon[74273]: osdmap e70: 3 total, 3 up, 3 in
Oct 01 16:36:51 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct 01 16:36:51 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 71 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=70/71 n=6 ec=49/34 lis/c=68/49 les/c/f=69/50/0 sis=70) [2] r=0 lpr=70 pi=[49,70)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:51 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 71 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=70/71 n=5 ec=49/34 lis/c=68/49 les/c/f=69/50/0 sis=70) [2] r=0 lpr=70 pi=[49,70)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:52 compute-0 ceph-mon[74273]: pgmap v141: 305 pgs: 2 unknown, 1 peering, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 4 objects/s recovering
Oct 01 16:36:52 compute-0 ceph-mon[74273]: osdmap e71: 3 total, 3 up, 3 in
Oct 01 16:36:52 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct 01 16:36:52 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct 01 16:36:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v143: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 449 B/s wr, 15 op/s; 169 B/s, 6 objects/s recovering
Oct 01 16:36:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct 01 16:36:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 01 16:36:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct 01 16:36:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 01 16:36:53 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct 01 16:36:53 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct 01 16:36:53 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.a scrub starts
Oct 01 16:36:53 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.a scrub ok
Oct 01 16:36:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:36:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct 01 16:36:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 01 16:36:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 01 16:36:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct 01 16:36:53 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct 01 16:36:53 compute-0 ceph-mon[74273]: 4.17 scrub starts
Oct 01 16:36:53 compute-0 ceph-mon[74273]: 4.17 scrub ok
Oct 01 16:36:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 01 16:36:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 01 16:36:53 compute-0 ceph-mon[74273]: 7.d scrub starts
Oct 01 16:36:53 compute-0 ceph-mon[74273]: 7.d scrub ok
Oct 01 16:36:53 compute-0 sudo[103858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxcwtpjjlwhftpphusnlaqobmgeokqwq ; /usr/bin/python3'
Oct 01 16:36:53 compute-0 sudo[103858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:36:54 compute-0 python3[103860]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:36:54 compute-0 podman[103861]: 2025-10-01 16:36:54.185259897 +0000 UTC m=+0.088850215 container create 3fd6d1c8ffac635c0e2652c4bf19f0d6256d50e3903ef186f08a3f8e412fcf17 (image=quay.io/ceph/ceph:v18, name=condescending_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Oct 01 16:36:54 compute-0 systemd[75900]: Starting Mark boot as successful...
Oct 01 16:36:54 compute-0 systemd[75900]: Finished Mark boot as successful.
Oct 01 16:36:54 compute-0 podman[103861]: 2025-10-01 16:36:54.119604415 +0000 UTC m=+0.023194763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:36:54 compute-0 systemd[1]: Started libpod-conmon-3fd6d1c8ffac635c0e2652c4bf19f0d6256d50e3903ef186f08a3f8e412fcf17.scope.
Oct 01 16:36:54 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 72 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=72 pruub=8.122892380s) [0] r=-1 lpr=72 pi=[53,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 110.856094360s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:54 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 72 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=72 pruub=8.122835159s) [0] r=-1 lpr=72 pi=[53,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.856094360s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:54 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 72 pg[6.9( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=72) [0] r=0 lpr=72 pi=[53,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:54 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a270a855fd47aaa0dabaab68fb9ce2ceaca848ab8f2b610d93a0a379910ab7a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a270a855fd47aaa0dabaab68fb9ce2ceaca848ab8f2b610d93a0a379910ab7a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:54 compute-0 podman[103861]: 2025-10-01 16:36:54.283253017 +0000 UTC m=+0.186843355 container init 3fd6d1c8ffac635c0e2652c4bf19f0d6256d50e3903ef186f08a3f8e412fcf17 (image=quay.io/ceph/ceph:v18, name=condescending_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 16:36:54 compute-0 podman[103861]: 2025-10-01 16:36:54.289340678 +0000 UTC m=+0.192930996 container start 3fd6d1c8ffac635c0e2652c4bf19f0d6256d50e3903ef186f08a3f8e412fcf17 (image=quay.io/ceph/ceph:v18, name=condescending_mcclintock, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 16:36:54 compute-0 podman[103861]: 2025-10-01 16:36:54.293115761 +0000 UTC m=+0.196706099 container attach 3fd6d1c8ffac635c0e2652c4bf19f0d6256d50e3903ef186f08a3f8e412fcf17 (image=quay.io/ceph/ceph:v18, name=condescending_mcclintock, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 16:36:54 compute-0 condescending_mcclintock[103878]: could not fetch user info: no user info saved
Oct 01 16:36:54 compute-0 systemd[1]: libpod-3fd6d1c8ffac635c0e2652c4bf19f0d6256d50e3903ef186f08a3f8e412fcf17.scope: Deactivated successfully.
Oct 01 16:36:54 compute-0 podman[103861]: 2025-10-01 16:36:54.474315536 +0000 UTC m=+0.377905864 container died 3fd6d1c8ffac635c0e2652c4bf19f0d6256d50e3903ef186f08a3f8e412fcf17 (image=quay.io/ceph/ceph:v18, name=condescending_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:36:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a270a855fd47aaa0dabaab68fb9ce2ceaca848ab8f2b610d93a0a379910ab7a2-merged.mount: Deactivated successfully.
Oct 01 16:36:54 compute-0 podman[103861]: 2025-10-01 16:36:54.512258183 +0000 UTC m=+0.415848501 container remove 3fd6d1c8ffac635c0e2652c4bf19f0d6256d50e3903ef186f08a3f8e412fcf17 (image=quay.io/ceph/ceph:v18, name=condescending_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:36:54 compute-0 systemd[1]: libpod-conmon-3fd6d1c8ffac635c0e2652c4bf19f0d6256d50e3903ef186f08a3f8e412fcf17.scope: Deactivated successfully.
Oct 01 16:36:54 compute-0 sudo[103858]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:54 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.b deep-scrub starts
Oct 01 16:36:54 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.b deep-scrub ok
Oct 01 16:36:54 compute-0 sudo[103998]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztnhppahqgpqxztpuwyvzdluowtleiqw ; /usr/bin/python3'
Oct 01 16:36:54 compute-0 sudo[103998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:36:54 compute-0 python3[104000]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:36:54 compute-0 podman[104001]: 2025-10-01 16:36:54.833826745 +0000 UTC m=+0.038796319 container create c0317696e86a49f0ac02e386b1b32ad863b01b881f2c9de5bad8863933a0f16f (image=quay.io/ceph/ceph:v18, name=peaceful_keller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 16:36:54 compute-0 systemd[1]: Started libpod-conmon-c0317696e86a49f0ac02e386b1b32ad863b01b881f2c9de5bad8863933a0f16f.scope.
Oct 01 16:36:54 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7b7c7ee5550c532d2cf081e4466495cc13dd2b0dae8304da8d4fbe7d585e896/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7b7c7ee5550c532d2cf081e4466495cc13dd2b0dae8304da8d4fbe7d585e896/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:36:54 compute-0 podman[104001]: 2025-10-01 16:36:54.898875402 +0000 UTC m=+0.103845006 container init c0317696e86a49f0ac02e386b1b32ad863b01b881f2c9de5bad8863933a0f16f (image=quay.io/ceph/ceph:v18, name=peaceful_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:36:54 compute-0 podman[104001]: 2025-10-01 16:36:54.90447187 +0000 UTC m=+0.109441444 container start c0317696e86a49f0ac02e386b1b32ad863b01b881f2c9de5bad8863933a0f16f (image=quay.io/ceph/ceph:v18, name=peaceful_keller, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 16:36:54 compute-0 podman[104001]: 2025-10-01 16:36:54.908556831 +0000 UTC m=+0.113526405 container attach c0317696e86a49f0ac02e386b1b32ad863b01b881f2c9de5bad8863933a0f16f (image=quay.io/ceph/ceph:v18, name=peaceful_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:36:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct 01 16:36:54 compute-0 podman[104001]: 2025-10-01 16:36:54.815712568 +0000 UTC m=+0.020682162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 16:36:54 compute-0 ceph-mon[74273]: pgmap v143: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 449 B/s wr, 15 op/s; 169 B/s, 6 objects/s recovering
Oct 01 16:36:54 compute-0 ceph-mon[74273]: 5.a scrub starts
Oct 01 16:36:54 compute-0 ceph-mon[74273]: 5.a scrub ok
Oct 01 16:36:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 01 16:36:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 01 16:36:54 compute-0 ceph-mon[74273]: osdmap e72: 3 total, 3 up, 3 in
Oct 01 16:36:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct 01 16:36:54 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct 01 16:36:54 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 73 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=72/73 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=72) [0] r=0 lpr=72 pi=[53,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:55 compute-0 peaceful_keller[104016]: {
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     "user_id": "openstack",
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     "display_name": "openstack",
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     "email": "",
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     "suspended": 0,
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     "max_buckets": 1000,
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     "subusers": [],
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     "keys": [
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:         {
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:             "user": "openstack",
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:             "access_key": "4C4EEK6M12G2XQQBP22M",
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:             "secret_key": "mAp80SzacZ6PUsA7gA3FgK5Fa7SUqhbXDF6f9CxN"
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:         }
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     ],
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     "swift_keys": [],
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     "caps": [],
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     "op_mask": "read, write, delete",
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     "default_placement": "",
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     "default_storage_class": "",
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     "placement_tags": [],
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     "bucket_quota": {
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:         "enabled": false,
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:         "check_on_raw": false,
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:         "max_size": -1,
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:         "max_size_kb": 0,
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:         "max_objects": -1
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     },
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     "user_quota": {
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:         "enabled": false,
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:         "check_on_raw": false,
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:         "max_size": -1,
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:         "max_size_kb": 0,
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:         "max_objects": -1
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     },
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     "temp_url_keys": [],
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     "type": "rgw",
Oct 01 16:36:55 compute-0 peaceful_keller[104016]:     "mfa_ids": []
Oct 01 16:36:55 compute-0 peaceful_keller[104016]: }
Oct 01 16:36:55 compute-0 peaceful_keller[104016]: 
Oct 01 16:36:55 compute-0 systemd[1]: libpod-c0317696e86a49f0ac02e386b1b32ad863b01b881f2c9de5bad8863933a0f16f.scope: Deactivated successfully.
Oct 01 16:36:55 compute-0 podman[104001]: 2025-10-01 16:36:55.140271334 +0000 UTC m=+0.345240908 container died c0317696e86a49f0ac02e386b1b32ad863b01b881f2c9de5bad8863933a0f16f (image=quay.io/ceph/ceph:v18, name=peaceful_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:36:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7b7c7ee5550c532d2cf081e4466495cc13dd2b0dae8304da8d4fbe7d585e896-merged.mount: Deactivated successfully.
Oct 01 16:36:55 compute-0 podman[104001]: 2025-10-01 16:36:55.173542995 +0000 UTC m=+0.378512569 container remove c0317696e86a49f0ac02e386b1b32ad863b01b881f2c9de5bad8863933a0f16f (image=quay.io/ceph/ceph:v18, name=peaceful_keller, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 16:36:55 compute-0 systemd[1]: libpod-conmon-c0317696e86a49f0ac02e386b1b32ad863b01b881f2c9de5bad8863933a0f16f.scope: Deactivated successfully.
Oct 01 16:36:55 compute-0 sudo[103998]: pam_unix(sudo:session): session closed for user root
Oct 01 16:36:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v146: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 455 B/s wr, 15 op/s; 171 B/s, 6 objects/s recovering
Oct 01 16:36:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct 01 16:36:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 01 16:36:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct 01 16:36:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 01 16:36:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct 01 16:36:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 01 16:36:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 01 16:36:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct 01 16:36:55 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct 01 16:36:55 compute-0 ceph-mon[74273]: 5.b deep-scrub starts
Oct 01 16:36:55 compute-0 ceph-mon[74273]: 5.b deep-scrub ok
Oct 01 16:36:55 compute-0 ceph-mon[74273]: osdmap e73: 3 total, 3 up, 3 in
Oct 01 16:36:55 compute-0 ceph-mon[74273]: pgmap v146: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 455 B/s wr, 15 op/s; 171 B/s, 6 objects/s recovering
Oct 01 16:36:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 01 16:36:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 01 16:36:56 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 74 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=74 pruub=8.358781815s) [0] r=-1 lpr=74 pi=[55,74)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 112.891479492s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:56 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 74 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=74 pruub=8.358562469s) [0] r=-1 lpr=74 pi=[55,74)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.891479492s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:56 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 74 pg[6.a( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=74) [0] r=0 lpr=74 pi=[55,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct 01 16:36:56 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 01 16:36:56 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 01 16:36:56 compute-0 ceph-mon[74273]: osdmap e74: 3 total, 3 up, 3 in
Oct 01 16:36:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct 01 16:36:56 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct 01 16:36:56 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 75 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=74/75 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=74) [0] r=0 lpr=74 pi=[55,74)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct 01 16:36:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 01 16:36:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct 01 16:36:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 01 16:36:57 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.d deep-scrub starts
Oct 01 16:36:57 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.d deep-scrub ok
Oct 01 16:36:57 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Oct 01 16:36:57 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Oct 01 16:36:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct 01 16:36:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 01 16:36:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 01 16:36:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct 01 16:36:57 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct 01 16:36:57 compute-0 ceph-mon[74273]: osdmap e75: 3 total, 3 up, 3 in
Oct 01 16:36:57 compute-0 ceph-mon[74273]: pgmap v149: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:36:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 01 16:36:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 01 16:36:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:36:58 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Oct 01 16:36:58 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Oct 01 16:36:58 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 76 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=76 pruub=15.480947495s) [1] r=-1 lpr=76 pi=[57,76)/1 crt=37'39 mlcod 37'39 active pruub 127.249496460s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:36:58 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 76 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=76 pruub=15.480512619s) [1] r=-1 lpr=76 pi=[57,76)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 127.249496460s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:36:58 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 76 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:36:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct 01 16:36:58 compute-0 ceph-mon[74273]: 5.d deep-scrub starts
Oct 01 16:36:58 compute-0 ceph-mon[74273]: 5.d deep-scrub ok
Oct 01 16:36:58 compute-0 ceph-mon[74273]: 4.19 scrub starts
Oct 01 16:36:58 compute-0 ceph-mon[74273]: 4.19 scrub ok
Oct 01 16:36:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 01 16:36:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 01 16:36:58 compute-0 ceph-mon[74273]: osdmap e76: 3 total, 3 up, 3 in
Oct 01 16:36:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct 01 16:36:59 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct 01 16:36:59 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 77 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=76/77 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:36:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 511 B/s wr, 4 op/s
Oct 01 16:36:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct 01 16:36:59 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 01 16:36:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct 01 16:36:59 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 01 16:36:59 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.e scrub starts
Oct 01 16:36:59 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.e scrub ok
Oct 01 16:36:59 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct 01 16:36:59 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Oct 01 16:37:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct 01 16:37:00 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 01 16:37:00 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 01 16:37:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct 01 16:37:00 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct 01 16:37:00 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 78 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=78 pruub=8.803480148s) [2] r=-1 lpr=78 pi=[49,78)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 117.304603577s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:00 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 78 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=78 pruub=8.802944183s) [2] r=-1 lpr=78 pi=[49,78)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 117.304603577s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:00 compute-0 ceph-mon[74273]: 4.1d scrub starts
Oct 01 16:37:00 compute-0 ceph-mon[74273]: 4.1d scrub ok
Oct 01 16:37:00 compute-0 ceph-mon[74273]: osdmap e77: 3 total, 3 up, 3 in
Oct 01 16:37:00 compute-0 ceph-mon[74273]: pgmap v152: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 511 B/s wr, 4 op/s
Oct 01 16:37:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 01 16:37:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 01 16:37:00 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 78 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=78 pruub=8.802824974s) [2] r=-1 lpr=78 pi=[49,78)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 117.305435181s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:00 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 78 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=78 pruub=8.802522659s) [2] r=-1 lpr=78 pi=[49,78)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 117.305435181s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:00 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 78 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=78) [2] r=0 lpr=78 pi=[49,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:00 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 78 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=78) [2] r=0 lpr=78 pi=[49,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:00 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct 01 16:37:00 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct 01 16:37:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct 01 16:37:01 compute-0 ceph-mon[74273]: 5.e scrub starts
Oct 01 16:37:01 compute-0 ceph-mon[74273]: 5.e scrub ok
Oct 01 16:37:01 compute-0 ceph-mon[74273]: 4.1e scrub starts
Oct 01 16:37:01 compute-0 ceph-mon[74273]: 4.1e scrub ok
Oct 01 16:37:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 01 16:37:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 01 16:37:01 compute-0 ceph-mon[74273]: osdmap e78: 3 total, 3 up, 3 in
Oct 01 16:37:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct 01 16:37:01 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct 01 16:37:01 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 79 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=79) [2]/[1] r=-1 lpr=79 pi=[49,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:01 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 79 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=79) [2]/[1] r=-1 lpr=79 pi=[49,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:01 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 79 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=79) [2]/[1] r=-1 lpr=79 pi=[49,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:01 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 79 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=79) [2]/[1] r=-1 lpr=79 pi=[49,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:01 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 79 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=79) [2]/[1] r=0 lpr=79 pi=[49,79)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:01 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 79 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=79) [2]/[1] r=0 lpr=79 pi=[49,79)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:01 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 79 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=79) [2]/[1] r=0 lpr=79 pi=[49,79)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:01 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 79 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=79) [2]/[1] r=0 lpr=79 pi=[49,79)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v155: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 511 B/s wr, 4 op/s
Oct 01 16:37:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct 01 16:37:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 01 16:37:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct 01 16:37:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 01 16:37:01 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Oct 01 16:37:01 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Oct 01 16:37:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct 01 16:37:02 compute-0 ceph-mon[74273]: 4.1f scrub starts
Oct 01 16:37:02 compute-0 ceph-mon[74273]: 4.1f scrub ok
Oct 01 16:37:02 compute-0 ceph-mon[74273]: osdmap e79: 3 total, 3 up, 3 in
Oct 01 16:37:02 compute-0 ceph-mon[74273]: pgmap v155: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 511 B/s wr, 4 op/s
Oct 01 16:37:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 01 16:37:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 01 16:37:02 compute-0 ceph-mon[74273]: 7.10 scrub starts
Oct 01 16:37:02 compute-0 ceph-mon[74273]: 7.10 scrub ok
Oct 01 16:37:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 01 16:37:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 01 16:37:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct 01 16:37:02 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct 01 16:37:02 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 80 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=80 pruub=12.465473175s) [1] r=-1 lpr=80 pi=[61,80)/1 crt=37'39 mlcod 37'39 active pruub 127.348014832s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:02 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 80 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=61/62 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=80 pruub=12.465235710s) [1] r=-1 lpr=80 pi=[61,80)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 127.348014832s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:02 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 80 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=80) [1] r=0 lpr=80 pi=[61,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:02 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 80 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=79/80 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=79) [2]/[1] async=[2] r=0 lpr=79 pi=[49,79)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:37:02 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 80 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=79/80 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=79) [2]/[1] async=[2] r=0 lpr=79 pi=[49,79)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:37:02 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct 01 16:37:02 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct 01 16:37:02 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Oct 01 16:37:02 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Oct 01 16:37:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct 01 16:37:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 01 16:37:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 01 16:37:03 compute-0 ceph-mon[74273]: osdmap e80: 3 total, 3 up, 3 in
Oct 01 16:37:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct 01 16:37:03 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct 01 16:37:03 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 81 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=81 pruub=15.371880531s) [2] async=[2] r=-1 lpr=81 pi=[49,81)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 126.932022095s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:03 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 81 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=79/80 n=6 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=81 pruub=15.371795654s) [2] r=-1 lpr=81 pi=[49,81)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.932022095s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:03 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 81 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=79/80 n=5 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=81 pruub=15.375397682s) [2] async=[2] r=-1 lpr=81 pi=[49,81)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 126.939445496s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:03 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 81 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=79/80 n=5 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=81 pruub=15.375286102s) [2] r=-1 lpr=81 pi=[49,81)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.939445496s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:03 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 81 pg[6.d( v 37'39 lc 33'12 (0'0,37'39] local-lis/les=80/81 n=1 ec=47/21 lis/c=61/61 les/c/f=62/62/0 sis=80) [1] r=0 lpr=80 pi=[61,80)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:37:03 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 81 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=81) [2] r=0 lpr=81 pi=[49,81)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:03 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 81 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=81) [2] r=0 lpr=81 pi=[49,81)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:03 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 81 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=81) [2] r=0 lpr=81 pi=[49,81)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:03 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 81 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=81) [2] r=0 lpr=81 pi=[49,81)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v158: 305 pgs: 2 active+remapped, 1 peering, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 3 objects/s recovering
Oct 01 16:37:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:37:03 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.19 deep-scrub starts
Oct 01 16:37:03 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.19 deep-scrub ok
Oct 01 16:37:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct 01 16:37:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct 01 16:37:04 compute-0 ceph-mon[74273]: 5.10 scrub starts
Oct 01 16:37:04 compute-0 ceph-mon[74273]: 5.10 scrub ok
Oct 01 16:37:04 compute-0 ceph-mon[74273]: 5.1e scrub starts
Oct 01 16:37:04 compute-0 ceph-mon[74273]: 5.1e scrub ok
Oct 01 16:37:04 compute-0 ceph-mon[74273]: osdmap e81: 3 total, 3 up, 3 in
Oct 01 16:37:04 compute-0 ceph-mon[74273]: pgmap v158: 305 pgs: 2 active+remapped, 1 peering, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 3 objects/s recovering
Oct 01 16:37:04 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct 01 16:37:04 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 82 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=81/82 n=6 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=81) [2] r=0 lpr=81 pi=[49,81)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:37:04 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 82 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=81/82 n=5 ec=49/34 lis/c=79/49 les/c/f=80/50/0 sis=81) [2] r=0 lpr=81 pi=[49,81)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:37:05 compute-0 ceph-mon[74273]: 2.19 deep-scrub starts
Oct 01 16:37:05 compute-0 ceph-mon[74273]: 2.19 deep-scrub ok
Oct 01 16:37:05 compute-0 ceph-mon[74273]: osdmap e82: 3 total, 3 up, 3 in
Oct 01 16:37:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v160: 305 pgs: 1 active+clean+scrubbing+deep, 2 active+remapped, 1 peering, 301 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 3 objects/s recovering
Oct 01 16:37:05 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Oct 01 16:37:05 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Oct 01 16:37:06 compute-0 ceph-mon[74273]: pgmap v160: 305 pgs: 1 active+clean+scrubbing+deep, 2 active+remapped, 1 peering, 301 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 3 objects/s recovering
Oct 01 16:37:06 compute-0 ceph-mon[74273]: 7.12 scrub starts
Oct 01 16:37:06 compute-0 ceph-mon[74273]: 7.12 scrub ok
Oct 01 16:37:06 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Oct 01 16:37:06 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Oct 01 16:37:06 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct 01 16:37:06 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct 01 16:37:07 compute-0 ceph-mon[74273]: 7.14 scrub starts
Oct 01 16:37:07 compute-0 ceph-mon[74273]: 7.14 scrub ok
Oct 01 16:37:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 1 active+clean+scrubbing+deep, 2 active+remapped, 1 peering, 301 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 2 objects/s recovering
Oct 01 16:37:07 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Oct 01 16:37:07 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Oct 01 16:37:08 compute-0 ceph-mon[74273]: 10.9 scrub starts
Oct 01 16:37:08 compute-0 ceph-mon[74273]: 10.9 scrub ok
Oct 01 16:37:08 compute-0 ceph-mon[74273]: pgmap v161: 305 pgs: 1 active+clean+scrubbing+deep, 2 active+remapped, 1 peering, 301 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 2 objects/s recovering
Oct 01 16:37:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:37:09 compute-0 ceph-mon[74273]: 5.7 scrub starts
Oct 01 16:37:09 compute-0 ceph-mon[74273]: 5.7 scrub ok
Oct 01 16:37:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 1 objects/s recovering
Oct 01 16:37:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct 01 16:37:09 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 01 16:37:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct 01 16:37:09 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 01 16:37:09 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Oct 01 16:37:09 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Oct 01 16:37:09 compute-0 sshd-session[104112]: Accepted publickey for zuul from 192.168.122.30 port 38240 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:37:09 compute-0 systemd-logind[788]: New session 34 of user zuul.
Oct 01 16:37:09 compute-0 systemd[1]: Started Session 34 of User zuul.
Oct 01 16:37:09 compute-0 sshd-session[104112]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:37:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct 01 16:37:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 01 16:37:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 01 16:37:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct 01 16:37:10 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct 01 16:37:10 compute-0 ceph-mon[74273]: pgmap v162: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 1 objects/s recovering
Oct 01 16:37:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 01 16:37:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 01 16:37:10 compute-0 ceph-mon[74273]: 7.16 scrub starts
Oct 01 16:37:10 compute-0 ceph-mon[74273]: 7.16 scrub ok
Oct 01 16:37:10 compute-0 python3.9[104265]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:37:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 01 16:37:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 01 16:37:11 compute-0 ceph-mon[74273]: osdmap e83: 3 total, 3 up, 3 in
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:37:11
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'backups', '.mgr']
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 0 objects/s recovering
Oct 01 16:37:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct 01 16:37:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 01 16:37:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct 01 16:37:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:37:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:37:11 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Oct 01 16:37:11 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Oct 01 16:37:11 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Oct 01 16:37:11 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Oct 01 16:37:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct 01 16:37:12 compute-0 ceph-mon[74273]: pgmap v164: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 0 objects/s recovering
Oct 01 16:37:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 01 16:37:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 01 16:37:12 compute-0 ceph-mon[74273]: 7.17 scrub starts
Oct 01 16:37:12 compute-0 ceph-mon[74273]: 7.17 scrub ok
Oct 01 16:37:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 01 16:37:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 01 16:37:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct 01 16:37:12 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct 01 16:37:12 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Oct 01 16:37:12 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Oct 01 16:37:12 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 84 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=84 pruub=9.714043617s) [2] r=-1 lpr=84 pi=[57,84)/1 crt=37'39 mlcod 37'39 active pruub 135.249786377s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:12 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 84 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=84 pruub=9.713825226s) [2] r=-1 lpr=84 pi=[57,84)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 135.249786377s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:12 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 84 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=84) [2] r=0 lpr=84 pi=[57,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:13 compute-0 sudo[104481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdkxdnchtdomypskgtonxxtczywlvxum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336632.7839904-32-52786459757826/AnsiballZ_command.py'
Oct 01 16:37:13 compute-0 sudo[104481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:37:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct 01 16:37:13 compute-0 ceph-mon[74273]: 5.17 scrub starts
Oct 01 16:37:13 compute-0 ceph-mon[74273]: 5.17 scrub ok
Oct 01 16:37:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 01 16:37:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 01 16:37:13 compute-0 ceph-mon[74273]: osdmap e84: 3 total, 3 up, 3 in
Oct 01 16:37:13 compute-0 ceph-mon[74273]: 7.19 scrub starts
Oct 01 16:37:13 compute-0 ceph-mon[74273]: 7.19 scrub ok
Oct 01 16:37:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct 01 16:37:13 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct 01 16:37:13 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 85 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=84/85 n=1 ec=47/21 lis/c=57/57 les/c/f=58/58/0 sis=84) [2] r=0 lpr=84 pi=[57,84)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:37:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v167: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Oct 01 16:37:13 compute-0 python3.9[104483]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:37:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:37:14 compute-0 ceph-mon[74273]: osdmap e85: 3 total, 3 up, 3 in
Oct 01 16:37:14 compute-0 ceph-mon[74273]: pgmap v167: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Oct 01 16:37:14 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct 01 16:37:14 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct 01 16:37:15 compute-0 ceph-mon[74273]: 5.1b scrub starts
Oct 01 16:37:15 compute-0 ceph-mon[74273]: 5.1b scrub ok
Oct 01 16:37:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:37:15 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Oct 01 16:37:15 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Oct 01 16:37:16 compute-0 ceph-mon[74273]: pgmap v168: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:37:16 compute-0 ceph-mon[74273]: 7.1d scrub starts
Oct 01 16:37:16 compute-0 ceph-mon[74273]: 7.1d scrub ok
Oct 01 16:37:16 compute-0 sudo[104500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:37:16 compute-0 sudo[104500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:37:16 compute-0 sudo[104500]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:16 compute-0 sudo[104526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:37:16 compute-0 sudo[104526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:37:16 compute-0 sudo[104526]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:16 compute-0 sudo[104551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:37:16 compute-0 sudo[104551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:37:16 compute-0 sudo[104551]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:16 compute-0 sudo[104576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:37:16 compute-0 sudo[104576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:37:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v169: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:37:17 compute-0 sudo[104576]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:37:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:37:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:37:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:37:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:37:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:37:17 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 51501fe2-861c-4847-9248-9bcc975c9674 does not exist
Oct 01 16:37:17 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 7f0ab769-ab28-406b-8ffc-92c392f777fb does not exist
Oct 01 16:37:17 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 79e46bdb-7612-483f-ba2c-53893da1e78a does not exist
Oct 01 16:37:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:37:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:37:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:37:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:37:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:37:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:37:17 compute-0 sudo[104636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:37:17 compute-0 sudo[104636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:37:17 compute-0 sudo[104636]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:17 compute-0 sudo[104661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:37:17 compute-0 sudo[104661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:37:17 compute-0 sudo[104661]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:17 compute-0 sudo[104686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:37:17 compute-0 sudo[104686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:37:17 compute-0 sudo[104686]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:17 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct 01 16:37:17 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct 01 16:37:17 compute-0 sudo[104711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:37:17 compute-0 sudo[104711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:37:17 compute-0 podman[104776]: 2025-10-01 16:37:17.871578317 +0000 UTC m=+0.035778399 container create 441dbe8f29f1a3b9db7250baf2c3c7344d79486555a1285cb790bf671ff723c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:37:17 compute-0 systemd[1]: Started libpod-conmon-441dbe8f29f1a3b9db7250baf2c3c7344d79486555a1285cb790bf671ff723c1.scope.
Oct 01 16:37:17 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Oct 01 16:37:17 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:37:17 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Oct 01 16:37:17 compute-0 podman[104776]: 2025-10-01 16:37:17.951304637 +0000 UTC m=+0.115504749 container init 441dbe8f29f1a3b9db7250baf2c3c7344d79486555a1285cb790bf671ff723c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 16:37:17 compute-0 podman[104776]: 2025-10-01 16:37:17.856061616 +0000 UTC m=+0.020261718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:37:17 compute-0 podman[104776]: 2025-10-01 16:37:17.960471747 +0000 UTC m=+0.124671829 container start 441dbe8f29f1a3b9db7250baf2c3c7344d79486555a1285cb790bf671ff723c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ramanujan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 01 16:37:17 compute-0 upbeat_ramanujan[104792]: 167 167
Oct 01 16:37:17 compute-0 systemd[1]: libpod-441dbe8f29f1a3b9db7250baf2c3c7344d79486555a1285cb790bf671ff723c1.scope: Deactivated successfully.
Oct 01 16:37:17 compute-0 podman[104776]: 2025-10-01 16:37:17.966341119 +0000 UTC m=+0.130541211 container attach 441dbe8f29f1a3b9db7250baf2c3c7344d79486555a1285cb790bf671ff723c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 16:37:17 compute-0 podman[104776]: 2025-10-01 16:37:17.966599868 +0000 UTC m=+0.130799950 container died 441dbe8f29f1a3b9db7250baf2c3c7344d79486555a1285cb790bf671ff723c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ramanujan, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:37:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb4667dabe9a0e996fd698283d6c1d809144248aeb7edf21a1ca24ac65a30278-merged.mount: Deactivated successfully.
Oct 01 16:37:18 compute-0 podman[104776]: 2025-10-01 16:37:18.000086196 +0000 UTC m=+0.164286288 container remove 441dbe8f29f1a3b9db7250baf2c3c7344d79486555a1285cb790bf671ff723c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ramanujan, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:37:18 compute-0 systemd[1]: libpod-conmon-441dbe8f29f1a3b9db7250baf2c3c7344d79486555a1285cb790bf671ff723c1.scope: Deactivated successfully.
Oct 01 16:37:18 compute-0 podman[104816]: 2025-10-01 16:37:18.133176644 +0000 UTC m=+0.039440710 container create 72ffec08c8d662e1a96621aeab7ed16e6eb7156f76b9f1afe343da11dbaaa77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 01 16:37:18 compute-0 systemd[1]: Started libpod-conmon-72ffec08c8d662e1a96621aeab7ed16e6eb7156f76b9f1afe343da11dbaaa77d.scope.
Oct 01 16:37:18 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ffde606936dc05e54ba824ead5b3d9c8b2303d29ad42cb75952e7e81db4881c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ffde606936dc05e54ba824ead5b3d9c8b2303d29ad42cb75952e7e81db4881c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ffde606936dc05e54ba824ead5b3d9c8b2303d29ad42cb75952e7e81db4881c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ffde606936dc05e54ba824ead5b3d9c8b2303d29ad42cb75952e7e81db4881c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ffde606936dc05e54ba824ead5b3d9c8b2303d29ad42cb75952e7e81db4881c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:37:18 compute-0 podman[104816]: 2025-10-01 16:37:18.114217933 +0000 UTC m=+0.020482019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:37:18 compute-0 podman[104816]: 2025-10-01 16:37:18.213558723 +0000 UTC m=+0.119822809 container init 72ffec08c8d662e1a96621aeab7ed16e6eb7156f76b9f1afe343da11dbaaa77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 16:37:18 compute-0 podman[104816]: 2025-10-01 16:37:18.224755774 +0000 UTC m=+0.131019830 container start 72ffec08c8d662e1a96621aeab7ed16e6eb7156f76b9f1afe343da11dbaaa77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:37:18 compute-0 podman[104816]: 2025-10-01 16:37:18.228915268 +0000 UTC m=+0.135179344 container attach 72ffec08c8d662e1a96621aeab7ed16e6eb7156f76b9f1afe343da11dbaaa77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 16:37:18 compute-0 ceph-mon[74273]: pgmap v169: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:37:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:37:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:37:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:37:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:37:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:37:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:37:18 compute-0 ceph-mon[74273]: 7.1e scrub starts
Oct 01 16:37:18 compute-0 ceph-mon[74273]: 10.8 scrub starts
Oct 01 16:37:18 compute-0 ceph-mon[74273]: 10.8 scrub ok
Oct 01 16:37:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:37:18 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Oct 01 16:37:18 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Oct 01 16:37:19 compute-0 trusting_allen[104833]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:37:19 compute-0 trusting_allen[104833]: --> relative data size: 1.0
Oct 01 16:37:19 compute-0 trusting_allen[104833]: --> All data devices are unavailable
Oct 01 16:37:19 compute-0 systemd[1]: libpod-72ffec08c8d662e1a96621aeab7ed16e6eb7156f76b9f1afe343da11dbaaa77d.scope: Deactivated successfully.
Oct 01 16:37:19 compute-0 podman[104816]: 2025-10-01 16:37:19.217568466 +0000 UTC m=+1.123832522 container died 72ffec08c8d662e1a96621aeab7ed16e6eb7156f76b9f1afe343da11dbaaa77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:37:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ffde606936dc05e54ba824ead5b3d9c8b2303d29ad42cb75952e7e81db4881c-merged.mount: Deactivated successfully.
Oct 01 16:37:19 compute-0 podman[104816]: 2025-10-01 16:37:19.266516582 +0000 UTC m=+1.172780638 container remove 72ffec08c8d662e1a96621aeab7ed16e6eb7156f76b9f1afe343da11dbaaa77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 16:37:19 compute-0 systemd[1]: libpod-conmon-72ffec08c8d662e1a96621aeab7ed16e6eb7156f76b9f1afe343da11dbaaa77d.scope: Deactivated successfully.
Oct 01 16:37:19 compute-0 sudo[104711]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v170: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 75 B/s, 0 objects/s recovering
Oct 01 16:37:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Oct 01 16:37:19 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 01 16:37:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct 01 16:37:19 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 01 16:37:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct 01 16:37:19 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct 01 16:37:19 compute-0 ceph-mon[74273]: 7.1e scrub ok
Oct 01 16:37:19 compute-0 ceph-mon[74273]: 10.4 scrub starts
Oct 01 16:37:19 compute-0 ceph-mon[74273]: 10.4 scrub ok
Oct 01 16:37:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 01 16:37:19 compute-0 sudo[104876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:37:19 compute-0 sudo[104876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:37:19 compute-0 sudo[104876]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:19 compute-0 sudo[104905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:37:19 compute-0 sudo[104905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:37:19 compute-0 sudo[104905]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:19 compute-0 sudo[104930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:37:19 compute-0 sudo[104930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:37:19 compute-0 sudo[104930]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:19 compute-0 sudo[104955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:37:19 compute-0 sudo[104955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:37:19 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Oct 01 16:37:19 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Oct 01 16:37:19 compute-0 podman[105023]: 2025-10-01 16:37:19.783366917 +0000 UTC m=+0.036533791 container create 007c4a1c78f994ae9070b0a09c2538f3dd6fcce8317eea6c2403813a02b97cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct 01 16:37:19 compute-0 systemd[1]: Started libpod-conmon-007c4a1c78f994ae9070b0a09c2538f3dd6fcce8317eea6c2403813a02b97cb2.scope.
Oct 01 16:37:19 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:37:19 compute-0 podman[105023]: 2025-10-01 16:37:19.766107591 +0000 UTC m=+0.019274465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:37:19 compute-0 podman[105023]: 2025-10-01 16:37:19.865055262 +0000 UTC m=+0.118222126 container init 007c4a1c78f994ae9070b0a09c2538f3dd6fcce8317eea6c2403813a02b97cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 16:37:19 compute-0 podman[105023]: 2025-10-01 16:37:19.870860089 +0000 UTC m=+0.124026963 container start 007c4a1c78f994ae9070b0a09c2538f3dd6fcce8317eea6c2403813a02b97cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:37:19 compute-0 inspiring_mahavira[105039]: 167 167
Oct 01 16:37:19 compute-0 systemd[1]: libpod-007c4a1c78f994ae9070b0a09c2538f3dd6fcce8317eea6c2403813a02b97cb2.scope: Deactivated successfully.
Oct 01 16:37:19 compute-0 podman[105023]: 2025-10-01 16:37:19.876100578 +0000 UTC m=+0.129267462 container attach 007c4a1c78f994ae9070b0a09c2538f3dd6fcce8317eea6c2403813a02b97cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 01 16:37:19 compute-0 podman[105023]: 2025-10-01 16:37:19.876668817 +0000 UTC m=+0.129835681 container died 007c4a1c78f994ae9070b0a09c2538f3dd6fcce8317eea6c2403813a02b97cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 16:37:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1a4a6524c12b42753c418694887ed3af2e7d7f3ca30e973384e329e6100c54b-merged.mount: Deactivated successfully.
Oct 01 16:37:19 compute-0 podman[105023]: 2025-10-01 16:37:19.909625745 +0000 UTC m=+0.162792599 container remove 007c4a1c78f994ae9070b0a09c2538f3dd6fcce8317eea6c2403813a02b97cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:37:19 compute-0 systemd[1]: libpod-conmon-007c4a1c78f994ae9070b0a09c2538f3dd6fcce8317eea6c2403813a02b97cb2.scope: Deactivated successfully.
Oct 01 16:37:20 compute-0 podman[105063]: 2025-10-01 16:37:20.050300205 +0000 UTC m=+0.046179440 container create 2bd583b141b80ab0509f2aa54c81e7e0015cc63caf7c0bbccff1e0daf19b2796 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 16:37:20 compute-0 systemd[1]: Started libpod-conmon-2bd583b141b80ab0509f2aa54c81e7e0015cc63caf7c0bbccff1e0daf19b2796.scope.
Oct 01 16:37:20 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c79e839bf7f2c810e80be831b15114987f5199242c518f4fcac403f708846be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c79e839bf7f2c810e80be831b15114987f5199242c518f4fcac403f708846be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:37:20 compute-0 podman[105063]: 2025-10-01 16:37:20.026143864 +0000 UTC m=+0.022023119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c79e839bf7f2c810e80be831b15114987f5199242c518f4fcac403f708846be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c79e839bf7f2c810e80be831b15114987f5199242c518f4fcac403f708846be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:37:20 compute-0 sudo[104481]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:20 compute-0 podman[105063]: 2025-10-01 16:37:20.145855387 +0000 UTC m=+0.141734652 container init 2bd583b141b80ab0509f2aa54c81e7e0015cc63caf7c0bbccff1e0daf19b2796 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 16:37:20 compute-0 podman[105063]: 2025-10-01 16:37:20.152742589 +0000 UTC m=+0.148621824 container start 2bd583b141b80ab0509f2aa54c81e7e0015cc63caf7c0bbccff1e0daf19b2796 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 16:37:20 compute-0 podman[105063]: 2025-10-01 16:37:20.156733185 +0000 UTC m=+0.152612420 container attach 2bd583b141b80ab0509f2aa54c81e7e0015cc63caf7c0bbccff1e0daf19b2796 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_swanson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 16:37:20 compute-0 ceph-mon[74273]: pgmap v170: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 75 B/s, 0 objects/s recovering
Oct 01 16:37:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 01 16:37:20 compute-0 ceph-mon[74273]: osdmap e86: 3 total, 3 up, 3 in
Oct 01 16:37:20 compute-0 ceph-mon[74273]: 5.1c deep-scrub starts
Oct 01 16:37:20 compute-0 ceph-mon[74273]: 5.1c deep-scrub ok
Oct 01 16:37:20 compute-0 sshd-session[104115]: Connection closed by 192.168.122.30 port 38240
Oct 01 16:37:20 compute-0 sshd-session[104112]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:37:20 compute-0 systemd-logind[788]: Session 34 logged out. Waiting for processes to exit.
Oct 01 16:37:20 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Oct 01 16:37:20 compute-0 systemd[1]: session-34.scope: Consumed 7.967s CPU time.
Oct 01 16:37:20 compute-0 systemd-logind[788]: Removed session 34.
Oct 01 16:37:20 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Oct 01 16:37:20 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:37:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:37:20 compute-0 youthful_swanson[105078]: {
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:     "0": [
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:         {
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "devices": [
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "/dev/loop3"
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             ],
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "lv_name": "ceph_lv0",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "lv_size": "21470642176",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "name": "ceph_lv0",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "tags": {
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.cluster_name": "ceph",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.crush_device_class": "",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.encrypted": "0",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.osd_id": "0",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.type": "block",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.vdo": "0"
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             },
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "type": "block",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "vg_name": "ceph_vg0"
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:         }
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:     ],
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:     "1": [
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:         {
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "devices": [
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "/dev/loop4"
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             ],
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "lv_name": "ceph_lv1",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "lv_size": "21470642176",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "name": "ceph_lv1",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "tags": {
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.cluster_name": "ceph",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.crush_device_class": "",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.encrypted": "0",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.osd_id": "1",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.type": "block",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.vdo": "0"
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             },
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "type": "block",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "vg_name": "ceph_vg1"
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:         }
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:     ],
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:     "2": [
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:         {
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "devices": [
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "/dev/loop5"
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             ],
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "lv_name": "ceph_lv2",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "lv_size": "21470642176",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "name": "ceph_lv2",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "tags": {
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.cluster_name": "ceph",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.crush_device_class": "",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.encrypted": "0",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.osd_id": "2",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.type": "block",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:                 "ceph.vdo": "0"
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             },
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "type": "block",
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:             "vg_name": "ceph_vg2"
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:         }
Oct 01 16:37:20 compute-0 youthful_swanson[105078]:     ]
Oct 01 16:37:20 compute-0 youthful_swanson[105078]: }
Oct 01 16:37:20 compute-0 systemd[1]: libpod-2bd583b141b80ab0509f2aa54c81e7e0015cc63caf7c0bbccff1e0daf19b2796.scope: Deactivated successfully.
Oct 01 16:37:20 compute-0 podman[105063]: 2025-10-01 16:37:20.875733138 +0000 UTC m=+0.871612373 container died 2bd583b141b80ab0509f2aa54c81e7e0015cc63caf7c0bbccff1e0daf19b2796 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:37:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c79e839bf7f2c810e80be831b15114987f5199242c518f4fcac403f708846be-merged.mount: Deactivated successfully.
Oct 01 16:37:20 compute-0 podman[105063]: 2025-10-01 16:37:20.925307007 +0000 UTC m=+0.921186242 container remove 2bd583b141b80ab0509f2aa54c81e7e0015cc63caf7c0bbccff1e0daf19b2796 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_swanson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:37:20 compute-0 systemd[1]: libpod-conmon-2bd583b141b80ab0509f2aa54c81e7e0015cc63caf7c0bbccff1e0daf19b2796.scope: Deactivated successfully.
Oct 01 16:37:20 compute-0 sudo[104955]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:21 compute-0 sudo[105123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:37:21 compute-0 sudo[105123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:37:21 compute-0 sudo[105123]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:21 compute-0 sudo[105148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:37:21 compute-0 sudo[105148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:37:21 compute-0 sudo[105148]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:21 compute-0 sudo[105173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:37:21 compute-0 sudo[105173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:37:21 compute-0 sudo[105173]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:21 compute-0 sudo[105198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:37:21 compute-0 sudo[105198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:37:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 74 B/s, 0 objects/s recovering
Oct 01 16:37:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Oct 01 16:37:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 01 16:37:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct 01 16:37:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 01 16:37:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct 01 16:37:21 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Oct 01 16:37:21 compute-0 ceph-mon[74273]: 5.1f scrub starts
Oct 01 16:37:21 compute-0 ceph-mon[74273]: 5.1f scrub ok
Oct 01 16:37:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 01 16:37:21 compute-0 podman[105265]: 2025-10-01 16:37:21.493930189 +0000 UTC m=+0.039099343 container create 40f6816a1cef4439b23fcc60239942137cf738382c4101f50ca5c5b5fc15de87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 16:37:21 compute-0 systemd[1]: Started libpod-conmon-40f6816a1cef4439b23fcc60239942137cf738382c4101f50ca5c5b5fc15de87.scope.
Oct 01 16:37:21 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:37:21 compute-0 podman[105265]: 2025-10-01 16:37:21.474402738 +0000 UTC m=+0.019571862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:37:21 compute-0 podman[105265]: 2025-10-01 16:37:21.576800353 +0000 UTC m=+0.121969487 container init 40f6816a1cef4439b23fcc60239942137cf738382c4101f50ca5c5b5fc15de87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shannon, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 16:37:21 compute-0 podman[105265]: 2025-10-01 16:37:21.58895367 +0000 UTC m=+0.134122804 container start 40f6816a1cef4439b23fcc60239942137cf738382c4101f50ca5c5b5fc15de87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 16:37:21 compute-0 podman[105265]: 2025-10-01 16:37:21.592800063 +0000 UTC m=+0.137969197 container attach 40f6816a1cef4439b23fcc60239942137cf738382c4101f50ca5c5b5fc15de87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shannon, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 16:37:21 compute-0 modest_shannon[105282]: 167 167
Oct 01 16:37:21 compute-0 systemd[1]: libpod-40f6816a1cef4439b23fcc60239942137cf738382c4101f50ca5c5b5fc15de87.scope: Deactivated successfully.
Oct 01 16:37:21 compute-0 podman[105265]: 2025-10-01 16:37:21.595367175 +0000 UTC m=+0.140536319 container died 40f6816a1cef4439b23fcc60239942137cf738382c4101f50ca5c5b5fc15de87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shannon, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 01 16:37:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-9edfe1662b0f476e5980cded0860d880b51927ee4cc5c8c64e4dc16eb0aa30af-merged.mount: Deactivated successfully.
Oct 01 16:37:21 compute-0 podman[105265]: 2025-10-01 16:37:21.636449517 +0000 UTC m=+0.181618651 container remove 40f6816a1cef4439b23fcc60239942137cf738382c4101f50ca5c5b5fc15de87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shannon, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 16:37:21 compute-0 systemd[1]: libpod-conmon-40f6816a1cef4439b23fcc60239942137cf738382c4101f50ca5c5b5fc15de87.scope: Deactivated successfully.
Oct 01 16:37:21 compute-0 podman[105306]: 2025-10-01 16:37:21.854650338 +0000 UTC m=+0.054964613 container create 2497675eb2d425fb4e9198a7edbcc0a76484d95eec00556f9cede195d0d338e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:37:21 compute-0 systemd[1]: Started libpod-conmon-2497675eb2d425fb4e9198a7edbcc0a76484d95eec00556f9cede195d0d338e3.scope.
Oct 01 16:37:21 compute-0 podman[105306]: 2025-10-01 16:37:21.838581142 +0000 UTC m=+0.038895437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:37:21 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1f0047a52ec5d3998ac9826dfcb6e5b24908e0a2250d63bab0efbd5caf61571/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1f0047a52ec5d3998ac9826dfcb6e5b24908e0a2250d63bab0efbd5caf61571/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1f0047a52ec5d3998ac9826dfcb6e5b24908e0a2250d63bab0efbd5caf61571/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1f0047a52ec5d3998ac9826dfcb6e5b24908e0a2250d63bab0efbd5caf61571/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:37:21 compute-0 podman[105306]: 2025-10-01 16:37:21.954956998 +0000 UTC m=+0.155271293 container init 2497675eb2d425fb4e9198a7edbcc0a76484d95eec00556f9cede195d0d338e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:37:21 compute-0 podman[105306]: 2025-10-01 16:37:21.960622163 +0000 UTC m=+0.160936458 container start 2497675eb2d425fb4e9198a7edbcc0a76484d95eec00556f9cede195d0d338e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 16:37:21 compute-0 podman[105306]: 2025-10-01 16:37:21.96523681 +0000 UTC m=+0.165551075 container attach 2497675eb2d425fb4e9198a7edbcc0a76484d95eec00556f9cede195d0d338e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:37:22 compute-0 ceph-mon[74273]: pgmap v172: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 74 B/s, 0 objects/s recovering
Oct 01 16:37:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 01 16:37:22 compute-0 ceph-mon[74273]: osdmap e87: 3 total, 3 up, 3 in
Oct 01 16:37:22 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct 01 16:37:22 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]: {
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:         "osd_id": 2,
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:         "type": "bluestore"
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:     },
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:         "osd_id": 0,
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:         "type": "bluestore"
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:     },
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:         "osd_id": 1,
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:         "type": "bluestore"
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]:     }
Oct 01 16:37:22 compute-0 hopeful_lumiere[105322]: }
Oct 01 16:37:22 compute-0 systemd[1]: libpod-2497675eb2d425fb4e9198a7edbcc0a76484d95eec00556f9cede195d0d338e3.scope: Deactivated successfully.
Oct 01 16:37:22 compute-0 systemd[1]: libpod-2497675eb2d425fb4e9198a7edbcc0a76484d95eec00556f9cede195d0d338e3.scope: Consumed 1.026s CPU time.
Oct 01 16:37:22 compute-0 podman[105306]: 2025-10-01 16:37:22.98514619 +0000 UTC m=+1.185460505 container died 2497675eb2d425fb4e9198a7edbcc0a76484d95eec00556f9cede195d0d338e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:37:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1f0047a52ec5d3998ac9826dfcb6e5b24908e0a2250d63bab0efbd5caf61571-merged.mount: Deactivated successfully.
Oct 01 16:37:23 compute-0 podman[105306]: 2025-10-01 16:37:23.054946228 +0000 UTC m=+1.255260493 container remove 2497675eb2d425fb4e9198a7edbcc0a76484d95eec00556f9cede195d0d338e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 16:37:23 compute-0 systemd[1]: libpod-conmon-2497675eb2d425fb4e9198a7edbcc0a76484d95eec00556f9cede195d0d338e3.scope: Deactivated successfully.
Oct 01 16:37:23 compute-0 sudo[105198]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:37:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:37:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:37:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:37:23 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 751c0820-6c42-46f1-a743-224691f62cfe does not exist
Oct 01 16:37:23 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 50c44635-0a17-49f7-8874-05b315a11caa does not exist
Oct 01 16:37:23 compute-0 sudo[105369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:37:23 compute-0 sudo[105369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:37:23 compute-0 sudo[105369]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:23 compute-0 sudo[105394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:37:23 compute-0 sudo[105394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:37:23 compute-0 sudo[105394]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 75 B/s, 0 objects/s recovering
Oct 01 16:37:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Oct 01 16:37:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 01 16:37:23 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.3 deep-scrub starts
Oct 01 16:37:23 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.3 deep-scrub ok
Oct 01 16:37:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:37:24 compute-0 ceph-mon[74273]: 8.1 scrub starts
Oct 01 16:37:24 compute-0 ceph-mon[74273]: 8.1 scrub ok
Oct 01 16:37:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:37:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:37:24 compute-0 ceph-mon[74273]: pgmap v174: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 75 B/s, 0 objects/s recovering
Oct 01 16:37:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 01 16:37:24 compute-0 ceph-mon[74273]: 8.3 deep-scrub starts
Oct 01 16:37:24 compute-0 ceph-mon[74273]: 8.3 deep-scrub ok
Oct 01 16:37:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct 01 16:37:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 01 16:37:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct 01 16:37:24 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct 01 16:37:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 01 16:37:25 compute-0 ceph-mon[74273]: osdmap e88: 3 total, 3 up, 3 in
Oct 01 16:37:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:37:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Oct 01 16:37:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 01 16:37:25 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 5.4 deep-scrub starts
Oct 01 16:37:25 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 5.4 deep-scrub ok
Oct 01 16:37:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct 01 16:37:26 compute-0 ceph-mon[74273]: pgmap v176: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:37:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 01 16:37:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 01 16:37:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct 01 16:37:26 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Oct 01 16:37:26 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Oct 01 16:37:26 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Oct 01 16:37:26 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Oct 01 16:37:26 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Oct 01 16:37:27 compute-0 ceph-mon[74273]: 5.4 deep-scrub starts
Oct 01 16:37:27 compute-0 ceph-mon[74273]: 5.4 deep-scrub ok
Oct 01 16:37:27 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 01 16:37:27 compute-0 ceph-mon[74273]: osdmap e89: 3 total, 3 up, 3 in
Oct 01 16:37:27 compute-0 ceph-mon[74273]: 8.5 scrub starts
Oct 01 16:37:27 compute-0 ceph-mon[74273]: 8.5 scrub ok
Oct 01 16:37:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v178: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:37:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Oct 01 16:37:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 01 16:37:27 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Oct 01 16:37:27 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Oct 01 16:37:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct 01 16:37:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 01 16:37:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct 01 16:37:28 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct 01 16:37:28 compute-0 ceph-mon[74273]: 10.3 scrub starts
Oct 01 16:37:28 compute-0 ceph-mon[74273]: 10.3 scrub ok
Oct 01 16:37:28 compute-0 ceph-mon[74273]: pgmap v178: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:37:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 01 16:37:28 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 89 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=89 pruub=10.012224197s) [2] r=-1 lpr=89 pi=[57,89)/1 crt=40'385 mlcod 0'0 active pruub 151.247009277s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:28 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 90 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=89 pruub=10.012161255s) [2] r=-1 lpr=89 pi=[57,89)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 151.247009277s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:28 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 90 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=89) [2] r=0 lpr=90 pi=[57,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:28 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct 01 16:37:28 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct 01 16:37:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:37:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct 01 16:37:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct 01 16:37:28 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct 01 16:37:28 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 91 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=91) [2]/[0] r=0 lpr=91 pi=[57,91)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:28 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 91 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=91) [2]/[0] r=0 lpr=91 pi=[57,91)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:28 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 91 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=91) [2]/[0] r=-1 lpr=91 pi=[57,91)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:28 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 91 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=91) [2]/[0] r=-1 lpr=91 pi=[57,91)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:29 compute-0 ceph-mon[74273]: 2.1d scrub starts
Oct 01 16:37:29 compute-0 ceph-mon[74273]: 2.1d scrub ok
Oct 01 16:37:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 01 16:37:29 compute-0 ceph-mon[74273]: osdmap e90: 3 total, 3 up, 3 in
Oct 01 16:37:29 compute-0 ceph-mon[74273]: 8.7 scrub starts
Oct 01 16:37:29 compute-0 ceph-mon[74273]: 8.7 scrub ok
Oct 01 16:37:29 compute-0 ceph-mon[74273]: osdmap e91: 3 total, 3 up, 3 in
Oct 01 16:37:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:37:29 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct 01 16:37:29 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct 01 16:37:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct 01 16:37:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct 01 16:37:29 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct 01 16:37:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Oct 01 16:37:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Oct 01 16:37:30 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 92 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=91/92 n=5 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=91) [2]/[0] async=[2] r=0 lpr=91 pi=[57,91)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:37:30 compute-0 ceph-mon[74273]: pgmap v181: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:37:30 compute-0 ceph-mon[74273]: 10.5 scrub starts
Oct 01 16:37:30 compute-0 ceph-mon[74273]: 10.5 scrub ok
Oct 01 16:37:30 compute-0 ceph-mon[74273]: osdmap e92: 3 total, 3 up, 3 in
Oct 01 16:37:30 compute-0 ceph-mon[74273]: 8.8 scrub starts
Oct 01 16:37:30 compute-0 ceph-mon[74273]: 8.8 scrub ok
Oct 01 16:37:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:37:31 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Oct 01 16:37:31 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Oct 01 16:37:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct 01 16:37:31 compute-0 ceph-mon[74273]: 9.2 scrub starts
Oct 01 16:37:31 compute-0 ceph-mon[74273]: 9.2 scrub ok
Oct 01 16:37:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct 01 16:37:31 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct 01 16:37:31 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 93 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=91/57 les/c/f=92/58/0 sis=93) [2] r=0 lpr=93 pi=[57,93)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:31 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 93 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=91/57 les/c/f=92/58/0 sis=93) [2] r=0 lpr=93 pi=[57,93)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:31 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 93 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=91/92 n=5 ec=49/34 lis/c=91/57 les/c/f=92/58/0 sis=93 pruub=14.937431335s) [2] async=[2] r=-1 lpr=93 pi=[57,93)/1 crt=40'385 mlcod 40'385 active pruub 159.552078247s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:31 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 93 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=91/92 n=5 ec=49/34 lis/c=91/57 les/c/f=92/58/0 sis=93 pruub=14.937363625s) [2] r=-1 lpr=93 pi=[57,93)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 159.552078247s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct 01 16:37:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct 01 16:37:32 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct 01 16:37:32 compute-0 ceph-mon[74273]: pgmap v183: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:37:32 compute-0 ceph-mon[74273]: osdmap e93: 3 total, 3 up, 3 in
Oct 01 16:37:32 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 94 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=93/94 n=5 ec=49/34 lis/c=91/57 les/c/f=92/58/0 sis=93) [2] r=0 lpr=93 pi=[57,93)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:37:32 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Oct 01 16:37:33 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Oct 01 16:37:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 1 objects/s recovering
Oct 01 16:37:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Oct 01 16:37:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 01 16:37:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:37:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct 01 16:37:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 01 16:37:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct 01 16:37:33 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct 01 16:37:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 95 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=95 pruub=11.587526321s) [1] r=-1 lpr=95 pi=[56,95)/1 crt=40'385 mlcod 0'0 active pruub 158.234375000s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:33 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 95 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=95 pruub=11.587451935s) [1] r=-1 lpr=95 pi=[56,95)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 158.234375000s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:33 compute-0 ceph-mon[74273]: osdmap e94: 3 total, 3 up, 3 in
Oct 01 16:37:33 compute-0 ceph-mon[74273]: 10.15 scrub starts
Oct 01 16:37:33 compute-0 ceph-mon[74273]: 10.15 scrub ok
Oct 01 16:37:33 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 01 16:37:33 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=95) [1] r=0 lpr=95 pi=[56,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:34 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct 01 16:37:34 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct 01 16:37:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct 01 16:37:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct 01 16:37:34 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct 01 16:37:34 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 96 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=96) [1]/[0] r=0 lpr=96 pi=[56,96)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:34 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 96 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=96) [1]/[0] r=0 lpr=96 pi=[56,96)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:34 compute-0 ceph-mon[74273]: pgmap v186: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 1 objects/s recovering
Oct 01 16:37:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 01 16:37:34 compute-0 ceph-mon[74273]: osdmap e95: 3 total, 3 up, 3 in
Oct 01 16:37:34 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[56,96)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:34 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[56,96)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Oct 01 16:37:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Oct 01 16:37:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Oct 01 16:37:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Oct 01 16:37:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 01 16:37:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct 01 16:37:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 01 16:37:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct 01 16:37:35 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct 01 16:37:35 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 97 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=49/34 lis/c=66/66 les/c/f=67/67/0 sis=97 pruub=15.791220665s) [0] r=-1 lpr=97 pi=[66,97)/1 crt=40'385 mlcod 0'0 active pruub 155.426681519s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:35 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 97 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=49/34 lis/c=66/66 les/c/f=67/67/0 sis=97 pruub=15.791070938s) [0] r=-1 lpr=97 pi=[66,97)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 155.426681519s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:35 compute-0 ceph-mon[74273]: 10.a scrub starts
Oct 01 16:37:35 compute-0 ceph-mon[74273]: 10.a scrub ok
Oct 01 16:37:35 compute-0 ceph-mon[74273]: osdmap e96: 3 total, 3 up, 3 in
Oct 01 16:37:35 compute-0 ceph-mon[74273]: 2.1c scrub starts
Oct 01 16:37:35 compute-0 ceph-mon[74273]: 2.1c scrub ok
Oct 01 16:37:35 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 01 16:37:35 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 01 16:37:35 compute-0 ceph-mon[74273]: osdmap e97: 3 total, 3 up, 3 in
Oct 01 16:37:35 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 97 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=66/66 les/c/f=67/67/0 sis=97) [0] r=0 lpr=97 pi=[66,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:35 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.f scrub starts
Oct 01 16:37:35 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.f scrub ok
Oct 01 16:37:36 compute-0 sshd-session[105419]: Accepted publickey for zuul from 192.168.122.30 port 47552 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:37:36 compute-0 systemd-logind[788]: New session 35 of user zuul.
Oct 01 16:37:36 compute-0 systemd[1]: Started Session 35 of User zuul.
Oct 01 16:37:36 compute-0 sshd-session[105419]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:37:36 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 97 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=96/97 n=5 ec=49/34 lis/c=56/56 les/c/f=57/57/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[56,96)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:37:36 compute-0 python3.9[105572]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 01 16:37:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct 01 16:37:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct 01 16:37:36 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Oct 01 16:37:36 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 98 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=49/34 lis/c=66/66 les/c/f=67/67/0 sis=98) [0]/[2] r=0 lpr=98 pi=[66,98)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:36 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 98 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=49/34 lis/c=66/66 les/c/f=67/67/0 sis=98) [0]/[2] r=0 lpr=98 pi=[66,98)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:36 compute-0 ceph-mon[74273]: pgmap v189: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Oct 01 16:37:36 compute-0 ceph-mon[74273]: 2.f scrub starts
Oct 01 16:37:36 compute-0 ceph-mon[74273]: 2.f scrub ok
Oct 01 16:37:36 compute-0 ceph-mon[74273]: osdmap e98: 3 total, 3 up, 3 in
Oct 01 16:37:36 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 98 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=66/66 les/c/f=67/67/0 sis=98) [0]/[2] r=-1 lpr=98 pi=[66,98)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:36 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 98 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=66/66 les/c/f=67/67/0 sis=98) [0]/[2] r=-1 lpr=98 pi=[66,98)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:36 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 98 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=96/97 n=5 ec=49/34 lis/c=96/56 les/c/f=97/57/0 sis=98 pruub=15.543090820s) [1] async=[1] r=-1 lpr=98 pi=[56,98)/1 crt=40'385 mlcod 40'385 active pruub 165.239196777s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:36 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 98 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=96/97 n=5 ec=49/34 lis/c=96/56 les/c/f=97/57/0 sis=98 pruub=15.543034554s) [1] r=-1 lpr=98 pi=[56,98)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 165.239196777s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:36 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 98 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=96/56 les/c/f=97/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:36 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 98 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=96/56 les/c/f=97/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:37:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Oct 01 16:37:37 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 01 16:37:37 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct 01 16:37:37 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct 01 16:37:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Oct 01 16:37:37 compute-0 python3.9[105746]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:37:37 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct 01 16:37:37 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct 01 16:37:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 01 16:37:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Oct 01 16:37:38 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Oct 01 16:37:38 compute-0 ceph-mon[74273]: pgmap v192: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:37:38 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 01 16:37:38 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 99 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=98/99 n=5 ec=49/34 lis/c=96/56 les/c/f=97/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:37:38 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.a scrub starts
Oct 01 16:37:38 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.a scrub ok
Oct 01 16:37:38 compute-0 sudo[105900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdwhuwkxhybmfsjjgxshwffggausfdgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336658.2601922-45-94622123012791/AnsiballZ_command.py'
Oct 01 16:37:38 compute-0 sudo[105900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:37:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:37:38 compute-0 python3.9[105902]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:37:38 compute-0 sudo[105900]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:39 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 99 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=98/99 n=5 ec=49/34 lis/c=66/66 les/c/f=67/67/0 sis=98) [0]/[2] async=[0] r=0 lpr=98 pi=[66,98)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:37:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Oct 01 16:37:39 compute-0 ceph-mon[74273]: 10.c scrub starts
Oct 01 16:37:39 compute-0 ceph-mon[74273]: 10.c scrub ok
Oct 01 16:37:39 compute-0 ceph-mon[74273]: 5.5 scrub starts
Oct 01 16:37:39 compute-0 ceph-mon[74273]: 5.5 scrub ok
Oct 01 16:37:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 01 16:37:39 compute-0 ceph-mon[74273]: osdmap e99: 3 total, 3 up, 3 in
Oct 01 16:37:39 compute-0 ceph-mon[74273]: 8.a scrub starts
Oct 01 16:37:39 compute-0 ceph-mon[74273]: 8.a scrub ok
Oct 01 16:37:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Oct 01 16:37:39 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Oct 01 16:37:39 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 100 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=98/66 les/c/f=99/67/0 sis=100) [0] r=0 lpr=100 pi=[66,100)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:39 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 100 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=98/66 les/c/f=99/67/0 sis=100) [0] r=0 lpr=100 pi=[66,100)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:39 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 100 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=98/99 n=5 ec=49/34 lis/c=98/66 les/c/f=99/67/0 sis=100 pruub=15.804939270s) [0] async=[0] r=-1 lpr=100 pi=[66,100)/1 crt=40'385 mlcod 40'385 active pruub 158.811340332s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:39 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 100 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=98/99 n=5 ec=49/34 lis/c=98/66 les/c/f=99/67/0 sis=100 pruub=15.804207802s) [0] r=-1 lpr=100 pi=[66,100)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 158.811340332s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Oct 01 16:37:39 compute-0 sudo[106053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvhagicvkdzrrwutimxgwoewuukpkwuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336659.1120408-57-1113914867528/AnsiballZ_stat.py'
Oct 01 16:37:39 compute-0 sudo[106053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:37:39 compute-0 python3.9[106055]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:37:39 compute-0 sudo[106053]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Oct 01 16:37:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Oct 01 16:37:40 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Oct 01 16:37:40 compute-0 ceph-mon[74273]: osdmap e100: 3 total, 3 up, 3 in
Oct 01 16:37:40 compute-0 ceph-mon[74273]: pgmap v195: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Oct 01 16:37:40 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 101 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=100/101 n=5 ec=49/34 lis/c=98/66 les/c/f=99/67/0 sis=100) [0] r=0 lpr=100 pi=[66,100)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:37:40 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct 01 16:37:40 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct 01 16:37:40 compute-0 sudo[106207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxhcwsngfxphavbqnydapluoirhzruts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336660.059703-68-58513669866199/AnsiballZ_file.py'
Oct 01 16:37:40 compute-0 sudo[106207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:37:40 compute-0 python3.9[106209]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:37:40 compute-0 sudo[106207]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:41 compute-0 ceph-mon[74273]: osdmap e101: 3 total, 3 up, 3 in
Oct 01 16:37:41 compute-0 ceph-mon[74273]: 9.4 scrub starts
Oct 01 16:37:41 compute-0 ceph-mon[74273]: 9.4 scrub ok
Oct 01 16:37:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 0 objects/s recovering
Oct 01 16:37:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:37:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:37:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:37:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:37:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:37:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:37:41 compute-0 python3.9[106359]: ansible-ansible.builtin.service_facts Invoked
Oct 01 16:37:41 compute-0 network[106376]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 01 16:37:41 compute-0 network[106377]: 'network-scripts' will be removed from distribution in near future.
Oct 01 16:37:41 compute-0 network[106378]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 01 16:37:41 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct 01 16:37:41 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct 01 16:37:42 compute-0 ceph-mon[74273]: pgmap v197: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 0 objects/s recovering
Oct 01 16:37:43 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Oct 01 16:37:43 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Oct 01 16:37:43 compute-0 ceph-mon[74273]: 10.18 scrub starts
Oct 01 16:37:43 compute-0 ceph-mon[74273]: 10.18 scrub ok
Oct 01 16:37:43 compute-0 ceph-mon[74273]: 10.17 scrub starts
Oct 01 16:37:43 compute-0 ceph-mon[74273]: 10.17 scrub ok
Oct 01 16:37:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct 01 16:37:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:37:44 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Oct 01 16:37:44 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Oct 01 16:37:44 compute-0 ceph-mon[74273]: pgmap v198: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct 01 16:37:44 compute-0 ceph-mon[74273]: 10.7 scrub starts
Oct 01 16:37:44 compute-0 ceph-mon[74273]: 10.7 scrub ok
Oct 01 16:37:44 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 10.1b deep-scrub starts
Oct 01 16:37:44 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 10.1b deep-scrub ok
Oct 01 16:37:45 compute-0 ceph-mon[74273]: 10.1b deep-scrub starts
Oct 01 16:37:45 compute-0 ceph-mon[74273]: 10.1b deep-scrub ok
Oct 01 16:37:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Oct 01 16:37:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Oct 01 16:37:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 01 16:37:45 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.13 deep-scrub starts
Oct 01 16:37:45 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.13 deep-scrub ok
Oct 01 16:37:45 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Oct 01 16:37:45 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Oct 01 16:37:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Oct 01 16:37:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 01 16:37:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Oct 01 16:37:46 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Oct 01 16:37:46 compute-0 ceph-mon[74273]: pgmap v199: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Oct 01 16:37:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 01 16:37:46 compute-0 ceph-mon[74273]: 8.13 deep-scrub starts
Oct 01 16:37:46 compute-0 ceph-mon[74273]: 8.13 deep-scrub ok
Oct 01 16:37:46 compute-0 ceph-mon[74273]: 10.1c scrub starts
Oct 01 16:37:46 compute-0 ceph-mon[74273]: 10.1c scrub ok
Oct 01 16:37:46 compute-0 python3.9[106640]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:37:47 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct 01 16:37:47 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct 01 16:37:47 compute-0 python3.9[106790]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:37:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 01 16:37:47 compute-0 ceph-mon[74273]: osdmap e102: 3 total, 3 up, 3 in
Oct 01 16:37:47 compute-0 ceph-mon[74273]: 5.2 scrub starts
Oct 01 16:37:47 compute-0 ceph-mon[74273]: 5.2 scrub ok
Oct 01 16:37:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Oct 01 16:37:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Oct 01 16:37:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 01 16:37:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Oct 01 16:37:48 compute-0 ceph-mon[74273]: pgmap v201: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Oct 01 16:37:48 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 01 16:37:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 01 16:37:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Oct 01 16:37:48 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Oct 01 16:37:48 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 103 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=103 pruub=14.112327576s) [2] r=-1 lpr=103 pi=[57,103)/1 crt=40'385 mlcod 0'0 active pruub 175.250549316s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:48 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 103 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=103 pruub=14.112270355s) [2] r=-1 lpr=103 pi=[57,103)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 175.250549316s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:48 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=103) [2] r=0 lpr=103 pi=[57,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:48 compute-0 python3.9[106944]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:37:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:37:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Oct 01 16:37:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Oct 01 16:37:48 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Oct 01 16:37:48 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 104 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=104) [2]/[0] r=-1 lpr=104 pi=[57,104)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:48 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 104 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=104) [2]/[0] r=-1 lpr=104 pi=[57,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:48 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 104 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=104) [2]/[0] r=0 lpr=104 pi=[57,104)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:48 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 104 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=104) [2]/[0] r=0 lpr=104 pi=[57,104)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:49 compute-0 sudo[107100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzyrqiadqmaoozmstlprohxzozkmvsbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336668.7953815-116-163993537925408/AnsiballZ_setup.py'
Oct 01 16:37:49 compute-0 sudo[107100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:37:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 01 16:37:49 compute-0 ceph-mon[74273]: osdmap e103: 3 total, 3 up, 3 in
Oct 01 16:37:49 compute-0 ceph-mon[74273]: osdmap e104: 3 total, 3 up, 3 in
Oct 01 16:37:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct 01 16:37:49 compute-0 python3.9[107102]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 16:37:49 compute-0 sudo[107100]: pam_unix(sudo:session): session closed for user root
Oct 01 16:37:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Oct 01 16:37:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Oct 01 16:37:49 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Oct 01 16:37:49 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct 01 16:37:49 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct 01 16:37:49 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 105 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=104/105 n=5 ec=49/34 lis/c=57/57 les/c/f=58/58/0 sis=104) [2]/[0] async=[2] r=0 lpr=104 pi=[57,104)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:37:50 compute-0 sudo[107184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiwcuankxjbhvxtdtvcihxdiyzyvjvqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336668.7953815-116-163993537925408/AnsiballZ_dnf.py'
Oct 01 16:37:50 compute-0 sudo[107184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:37:50 compute-0 python3.9[107186]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:37:50 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct 01 16:37:50 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct 01 16:37:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Oct 01 16:37:50 compute-0 ceph-mon[74273]: pgmap v204: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct 01 16:37:50 compute-0 ceph-mon[74273]: osdmap e105: 3 total, 3 up, 3 in
Oct 01 16:37:50 compute-0 ceph-mon[74273]: 10.1d scrub starts
Oct 01 16:37:50 compute-0 ceph-mon[74273]: 10.1d scrub ok
Oct 01 16:37:50 compute-0 ceph-mon[74273]: 8.16 scrub starts
Oct 01 16:37:50 compute-0 ceph-mon[74273]: 8.16 scrub ok
Oct 01 16:37:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Oct 01 16:37:50 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Oct 01 16:37:50 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 106 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=104/105 n=5 ec=49/34 lis/c=104/57 les/c/f=105/58/0 sis=106 pruub=15.061511040s) [2] async=[2] r=-1 lpr=106 pi=[57,106)/1 crt=40'385 mlcod 40'385 active pruub 178.663589478s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:50 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 106 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=104/105 n=5 ec=49/34 lis/c=104/57 les/c/f=105/58/0 sis=106 pruub=15.061413765s) [2] r=-1 lpr=106 pi=[57,106)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 178.663589478s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:50 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 106 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=104/57 les/c/f=105/58/0 sis=106) [2] r=0 lpr=106 pi=[57,106)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:50 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 106 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=104/57 les/c/f=105/58/0 sis=106) [2] r=0 lpr=106 pi=[57,106)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:50 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 10.1f deep-scrub starts
Oct 01 16:37:50 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 10.1f deep-scrub ok
Oct 01 16:37:51 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct 01 16:37:51 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct 01 16:37:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:37:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Oct 01 16:37:51 compute-0 ceph-mon[74273]: osdmap e106: 3 total, 3 up, 3 in
Oct 01 16:37:51 compute-0 ceph-mon[74273]: 10.1f deep-scrub starts
Oct 01 16:37:51 compute-0 ceph-mon[74273]: 10.1f deep-scrub ok
Oct 01 16:37:51 compute-0 ceph-mon[74273]: 5.3 scrub starts
Oct 01 16:37:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Oct 01 16:37:51 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Oct 01 16:37:51 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 107 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=106/107 n=5 ec=49/34 lis/c=104/57 les/c/f=105/58/0 sis=106) [2] r=0 lpr=106 pi=[57,106)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:37:52 compute-0 ceph-mon[74273]: 5.3 scrub ok
Oct 01 16:37:52 compute-0 ceph-mon[74273]: pgmap v207: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:37:52 compute-0 ceph-mon[74273]: osdmap e107: 3 total, 3 up, 3 in
Oct 01 16:37:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 1 objects/s recovering
Oct 01 16:37:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Oct 01 16:37:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 01 16:37:53 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.17 deep-scrub starts
Oct 01 16:37:53 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.17 deep-scrub ok
Oct 01 16:37:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:37:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Oct 01 16:37:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 01 16:37:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Oct 01 16:37:53 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Oct 01 16:37:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 01 16:37:53 compute-0 ceph-mon[74273]: 8.17 deep-scrub starts
Oct 01 16:37:53 compute-0 ceph-mon[74273]: 8.17 deep-scrub ok
Oct 01 16:37:54 compute-0 ceph-mon[74273]: pgmap v209: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 1 objects/s recovering
Oct 01 16:37:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 01 16:37:54 compute-0 ceph-mon[74273]: osdmap e108: 3 total, 3 up, 3 in
Oct 01 16:37:54 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Oct 01 16:37:54 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Oct 01 16:37:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 1 objects/s recovering
Oct 01 16:37:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Oct 01 16:37:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 01 16:37:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Oct 01 16:37:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 01 16:37:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Oct 01 16:37:55 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Oct 01 16:37:55 compute-0 ceph-mon[74273]: 4.13 scrub starts
Oct 01 16:37:55 compute-0 ceph-mon[74273]: 4.13 scrub ok
Oct 01 16:37:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 01 16:37:56 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Oct 01 16:37:56 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Oct 01 16:37:56 compute-0 ceph-mon[74273]: pgmap v211: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 1 objects/s recovering
Oct 01 16:37:56 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 01 16:37:56 compute-0 ceph-mon[74273]: osdmap e109: 3 total, 3 up, 3 in
Oct 01 16:37:56 compute-0 ceph-mon[74273]: 8.19 scrub starts
Oct 01 16:37:56 compute-0 ceph-mon[74273]: 8.19 scrub ok
Oct 01 16:37:57 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.d scrub starts
Oct 01 16:37:57 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.d scrub ok
Oct 01 16:37:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct 01 16:37:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Oct 01 16:37:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 01 16:37:57 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 4.e scrub starts
Oct 01 16:37:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Oct 01 16:37:57 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 4.e scrub ok
Oct 01 16:37:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 01 16:37:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 01 16:37:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Oct 01 16:37:57 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Oct 01 16:37:58 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.18 deep-scrub starts
Oct 01 16:37:58 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.18 deep-scrub ok
Oct 01 16:37:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:37:58 compute-0 ceph-mon[74273]: 10.d scrub starts
Oct 01 16:37:58 compute-0 ceph-mon[74273]: 10.d scrub ok
Oct 01 16:37:58 compute-0 ceph-mon[74273]: pgmap v213: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct 01 16:37:58 compute-0 ceph-mon[74273]: 4.e scrub starts
Oct 01 16:37:58 compute-0 ceph-mon[74273]: 4.e scrub ok
Oct 01 16:37:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 01 16:37:58 compute-0 ceph-mon[74273]: osdmap e110: 3 total, 3 up, 3 in
Oct 01 16:37:59 compute-0 PackageKit[31093]: daemon quit
Oct 01 16:37:59 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 01 16:37:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:37:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Oct 01 16:37:59 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 01 16:37:59 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.e deep-scrub starts
Oct 01 16:37:59 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.e deep-scrub ok
Oct 01 16:37:59 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.a scrub starts
Oct 01 16:37:59 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.a scrub ok
Oct 01 16:37:59 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 110 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=81/82 n=5 ec=49/34 lis/c=81/81 les/c/f=82/82/0 sis=110 pruub=8.307058334s) [0] r=-1 lpr=110 pi=[81,110)/1 crt=40'385 mlcod 0'0 active pruub 171.912918091s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:59 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 110 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=81/82 n=5 ec=49/34 lis/c=81/81 les/c/f=82/82/0 sis=110 pruub=8.306986809s) [0] r=-1 lpr=110 pi=[81,110)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 171.912918091s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:59 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 110 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=81/81 les/c/f=82/82/0 sis=110) [0] r=0 lpr=110 pi=[81,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:37:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Oct 01 16:37:59 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 01 16:37:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Oct 01 16:37:59 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Oct 01 16:37:59 compute-0 ceph-mon[74273]: 2.18 deep-scrub starts
Oct 01 16:37:59 compute-0 ceph-mon[74273]: 2.18 deep-scrub ok
Oct 01 16:37:59 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 01 16:37:59 compute-0 ceph-mon[74273]: 9.a scrub starts
Oct 01 16:37:59 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=81/81 les/c/f=82/82/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[81,111)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:59 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=81/81 les/c/f=82/82/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[81,111)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:37:59 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 111 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=81/82 n=5 ec=49/34 lis/c=81/81 les/c/f=82/82/0 sis=111) [0]/[2] r=0 lpr=111 pi=[81,111)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:37:59 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 111 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=81/82 n=5 ec=49/34 lis/c=81/81 les/c/f=82/82/0 sis=111) [0]/[2] r=0 lpr=111 pi=[81,111)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:38:00 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Oct 01 16:38:00 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Oct 01 16:38:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Oct 01 16:38:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Oct 01 16:38:00 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Oct 01 16:38:00 compute-0 ceph-mon[74273]: pgmap v215: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:00 compute-0 ceph-mon[74273]: 10.e deep-scrub starts
Oct 01 16:38:00 compute-0 ceph-mon[74273]: 10.e deep-scrub ok
Oct 01 16:38:00 compute-0 ceph-mon[74273]: 9.a scrub ok
Oct 01 16:38:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 01 16:38:00 compute-0 ceph-mon[74273]: osdmap e111: 3 total, 3 up, 3 in
Oct 01 16:38:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Oct 01 16:38:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 01 16:38:01 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 112 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=111/112 n=5 ec=49/34 lis/c=81/81 les/c/f=82/82/0 sis=111) [0]/[2] async=[0] r=0 lpr=111 pi=[81,111)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:38:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Oct 01 16:38:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 01 16:38:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Oct 01 16:38:01 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Oct 01 16:38:01 compute-0 ceph-mon[74273]: 4.18 scrub starts
Oct 01 16:38:01 compute-0 ceph-mon[74273]: 4.18 scrub ok
Oct 01 16:38:01 compute-0 ceph-mon[74273]: osdmap e112: 3 total, 3 up, 3 in
Oct 01 16:38:01 compute-0 ceph-mon[74273]: pgmap v218: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 01 16:38:01 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 113 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=49/34 lis/c=66/66 les/c/f=67/67/0 sis=113 pruub=13.728839874s) [0] r=-1 lpr=113 pi=[66,113)/1 crt=40'385 mlcod 0'0 active pruub 179.427413940s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:38:01 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 113 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=111/112 n=5 ec=49/34 lis/c=111/81 les/c/f=112/82/0 sis=113 pruub=15.442478180s) [0] async=[0] r=-1 lpr=113 pi=[81,113)/1 crt=40'385 mlcod 40'385 active pruub 181.141067505s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:38:01 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 113 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=111/112 n=5 ec=49/34 lis/c=111/81 les/c/f=112/82/0 sis=113 pruub=15.442378044s) [0] r=-1 lpr=113 pi=[81,113)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 181.141067505s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:38:01 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 113 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=49/34 lis/c=66/66 les/c/f=67/67/0 sis=113 pruub=13.728336334s) [0] r=-1 lpr=113 pi=[66,113)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 179.427413940s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:38:01 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=66/66 les/c/f=67/67/0 sis=113) [0] r=0 lpr=113 pi=[66,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:38:01 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 113 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=111/81 les/c/f=112/82/0 sis=113) [0] r=0 lpr=113 pi=[81,113)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:38:01 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 113 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=111/81 les/c/f=112/82/0 sis=113) [0] r=0 lpr=113 pi=[81,113)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:38:02 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.b scrub starts
Oct 01 16:38:02 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.b scrub ok
Oct 01 16:38:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Oct 01 16:38:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Oct 01 16:38:02 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Oct 01 16:38:02 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 114 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=49/34 lis/c=66/66 les/c/f=67/67/0 sis=114) [0]/[2] r=0 lpr=114 pi=[66,114)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:38:02 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 114 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=49/34 lis/c=66/66 les/c/f=67/67/0 sis=114) [0]/[2] r=0 lpr=114 pi=[66,114)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:38:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 01 16:38:02 compute-0 ceph-mon[74273]: osdmap e113: 3 total, 3 up, 3 in
Oct 01 16:38:02 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=66/66 les/c/f=67/67/0 sis=114) [0]/[2] r=-1 lpr=114 pi=[66,114)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:38:02 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=66/66 les/c/f=67/67/0 sis=114) [0]/[2] r=-1 lpr=114 pi=[66,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:38:02 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 114 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=113/114 n=5 ec=49/34 lis/c=111/81 les/c/f=112/82/0 sis=113) [0] r=0 lpr=113 pi=[81,113)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:38:03 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Oct 01 16:38:03 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Oct 01 16:38:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:03 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Oct 01 16:38:03 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Oct 01 16:38:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:38:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Oct 01 16:38:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Oct 01 16:38:03 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Oct 01 16:38:03 compute-0 ceph-mon[74273]: 2.b scrub starts
Oct 01 16:38:03 compute-0 ceph-mon[74273]: 2.b scrub ok
Oct 01 16:38:03 compute-0 ceph-mon[74273]: osdmap e114: 3 total, 3 up, 3 in
Oct 01 16:38:03 compute-0 ceph-mon[74273]: pgmap v221: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:03 compute-0 ceph-mon[74273]: 8.1e scrub starts
Oct 01 16:38:03 compute-0 ceph-mon[74273]: 8.1e scrub ok
Oct 01 16:38:03 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 115 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=114/115 n=5 ec=49/34 lis/c=66/66 les/c/f=67/67/0 sis=114) [0]/[2] async=[0] r=0 lpr=114 pi=[66,114)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:38:04 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.10 deep-scrub starts
Oct 01 16:38:04 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.10 deep-scrub ok
Oct 01 16:38:04 compute-0 ceph-mon[74273]: 10.1 scrub starts
Oct 01 16:38:04 compute-0 ceph-mon[74273]: 10.1 scrub ok
Oct 01 16:38:04 compute-0 ceph-mon[74273]: osdmap e115: 3 total, 3 up, 3 in
Oct 01 16:38:04 compute-0 ceph-mon[74273]: 9.10 deep-scrub starts
Oct 01 16:38:04 compute-0 ceph-mon[74273]: 9.10 deep-scrub ok
Oct 01 16:38:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Oct 01 16:38:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Oct 01 16:38:04 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Oct 01 16:38:04 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 116 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=114/115 n=5 ec=49/34 lis/c=114/66 les/c/f=115/67/0 sis=116 pruub=14.978901863s) [0] async=[0] r=-1 lpr=116 pi=[66,116)/1 crt=40'385 mlcod 40'385 active pruub 183.731384277s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:38:04 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 116 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=114/115 n=5 ec=49/34 lis/c=114/66 les/c/f=115/67/0 sis=116 pruub=14.978734016s) [0] r=-1 lpr=116 pi=[66,116)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 183.731384277s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:38:04 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 116 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=114/66 les/c/f=115/67/0 sis=116) [0] r=0 lpr=116 pi=[66,116)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:38:04 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 116 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=114/66 les/c/f=115/67/0 sis=116) [0] r=0 lpr=116 pi=[66,116)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:38:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct 01 16:38:05 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 5.15 deep-scrub starts
Oct 01 16:38:05 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 5.15 deep-scrub ok
Oct 01 16:38:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Oct 01 16:38:05 compute-0 ceph-mon[74273]: osdmap e116: 3 total, 3 up, 3 in
Oct 01 16:38:05 compute-0 ceph-mon[74273]: pgmap v224: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct 01 16:38:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Oct 01 16:38:05 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Oct 01 16:38:05 compute-0 ceph-osd[88140]: osd.0 pg_epoch: 117 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=116/117 n=5 ec=49/34 lis/c=114/66 les/c/f=115/67/0 sis=116) [0] r=0 lpr=116 pi=[66,116)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:38:06 compute-0 ceph-mon[74273]: 5.15 deep-scrub starts
Oct 01 16:38:06 compute-0 ceph-mon[74273]: 5.15 deep-scrub ok
Oct 01 16:38:06 compute-0 ceph-mon[74273]: osdmap e117: 3 total, 3 up, 3 in
Oct 01 16:38:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 1 objects/s recovering
Oct 01 16:38:07 compute-0 ceph-mon[74273]: pgmap v226: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 1 objects/s recovering
Oct 01 16:38:08 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Oct 01 16:38:08 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Oct 01 16:38:08 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct 01 16:38:08 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct 01 16:38:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:38:08 compute-0 ceph-mon[74273]: 9.12 scrub starts
Oct 01 16:38:08 compute-0 ceph-mon[74273]: 9.12 scrub ok
Oct 01 16:38:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct 01 16:38:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 01 16:38:09 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 16:38:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Oct 01 16:38:09 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 16:38:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Oct 01 16:38:10 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Oct 01 16:38:10 compute-0 ceph-mon[74273]: 2.1f scrub starts
Oct 01 16:38:10 compute-0 ceph-mon[74273]: 2.1f scrub ok
Oct 01 16:38:10 compute-0 ceph-mon[74273]: pgmap v227: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct 01 16:38:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 16:38:10 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 118 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=49/34 lis/c=68/68 les/c/f=69/69/0 sis=118 pruub=14.932944298s) [1] r=-1 lpr=118 pi=[68,118)/1 crt=40'385 mlcod 0'0 active pruub 189.569885254s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:38:10 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 118 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=49/34 lis/c=68/68 les/c/f=69/69/0 sis=118 pruub=14.932882309s) [1] r=-1 lpr=118 pi=[68,118)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 189.569885254s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:38:10 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=68/68 les/c/f=69/69/0 sis=118) [1] r=0 lpr=118 pi=[68,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:38:10 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 4.a scrub starts
Oct 01 16:38:10 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 4.a scrub ok
Oct 01 16:38:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Oct 01 16:38:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 16:38:11 compute-0 ceph-mon[74273]: osdmap e118: 3 total, 3 up, 3 in
Oct 01 16:38:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Oct 01 16:38:11 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Oct 01 16:38:11 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=68/68 les/c/f=69/69/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[68,119)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:38:11 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=68/68 les/c/f=69/69/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[68,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 16:38:11 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 119 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=49/34 lis/c=68/68 les/c/f=69/69/0 sis=119) [1]/[2] r=0 lpr=119 pi=[68,119)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:38:11 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 119 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=49/34 lis/c=68/68 les/c/f=69/69/0 sis=119) [1]/[2] r=0 lpr=119 pi=[68,119)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:38:11
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'volumes', 'vms', '.rgw.root', 'images', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data']
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:38:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:38:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Oct 01 16:38:12 compute-0 ceph-mon[74273]: 4.a scrub starts
Oct 01 16:38:12 compute-0 ceph-mon[74273]: 4.a scrub ok
Oct 01 16:38:12 compute-0 ceph-mon[74273]: osdmap e119: 3 total, 3 up, 3 in
Oct 01 16:38:12 compute-0 ceph-mon[74273]: pgmap v230: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 01 16:38:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Oct 01 16:38:12 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Oct 01 16:38:12 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 120 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=119/120 n=5 ec=49/34 lis/c=68/68 les/c/f=69/69/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[68,119)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:38:12 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Oct 01 16:38:12 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Oct 01 16:38:12 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Oct 01 16:38:12 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Oct 01 16:38:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Oct 01 16:38:13 compute-0 ceph-mon[74273]: osdmap e120: 3 total, 3 up, 3 in
Oct 01 16:38:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Oct 01 16:38:13 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Oct 01 16:38:13 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 121 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=119/120 n=5 ec=49/34 lis/c=119/68 les/c/f=120/69/0 sis=121 pruub=15.000177383s) [1] async=[1] r=-1 lpr=121 pi=[68,121)/1 crt=40'385 mlcod 40'385 active pruub 191.843826294s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:38:13 compute-0 ceph-osd[90269]: osd.2 pg_epoch: 121 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=119/120 n=5 ec=49/34 lis/c=119/68 les/c/f=120/69/0 sis=121 pruub=14.999989510s) [1] r=-1 lpr=121 pi=[68,121)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 191.843826294s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 16:38:13 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 121 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=119/68 les/c/f=120/69/0 sis=121) [1] r=0 lpr=121 pi=[68,121)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 16:38:13 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 121 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=49/34 lis/c=119/68 les/c/f=120/69/0 sis=121) [1] r=0 lpr=121 pi=[68,121)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 16:38:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:13 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Oct 01 16:38:13 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Oct 01 16:38:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:38:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Oct 01 16:38:14 compute-0 ceph-mon[74273]: 10.1e scrub starts
Oct 01 16:38:14 compute-0 ceph-mon[74273]: 10.1e scrub ok
Oct 01 16:38:14 compute-0 ceph-mon[74273]: 9.14 scrub starts
Oct 01 16:38:14 compute-0 ceph-mon[74273]: 9.14 scrub ok
Oct 01 16:38:14 compute-0 ceph-mon[74273]: osdmap e121: 3 total, 3 up, 3 in
Oct 01 16:38:14 compute-0 ceph-mon[74273]: pgmap v233: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Oct 01 16:38:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Oct 01 16:38:14 compute-0 ceph-osd[89167]: osd.1 pg_epoch: 122 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=121/122 n=5 ec=49/34 lis/c=119/68 les/c/f=120/69/0 sis=121) [1] r=0 lpr=121 pi=[68,121)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 16:38:14 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Oct 01 16:38:14 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Oct 01 16:38:15 compute-0 ceph-mon[74273]: 2.13 scrub starts
Oct 01 16:38:15 compute-0 ceph-mon[74273]: 2.13 scrub ok
Oct 01 16:38:15 compute-0 ceph-mon[74273]: osdmap e122: 3 total, 3 up, 3 in
Oct 01 16:38:15 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct 01 16:38:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:15 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct 01 16:38:15 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Oct 01 16:38:15 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Oct 01 16:38:16 compute-0 ceph-mon[74273]: 2.8 scrub starts
Oct 01 16:38:16 compute-0 ceph-mon[74273]: 2.8 scrub ok
Oct 01 16:38:16 compute-0 ceph-mon[74273]: pgmap v235: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:16 compute-0 ceph-mon[74273]: 9.1a scrub starts
Oct 01 16:38:16 compute-0 ceph-mon[74273]: 9.1a scrub ok
Oct 01 16:38:16 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Oct 01 16:38:16 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Oct 01 16:38:16 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Oct 01 16:38:16 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Oct 01 16:38:16 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct 01 16:38:16 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct 01 16:38:17 compute-0 ceph-mon[74273]: 2.16 scrub starts
Oct 01 16:38:17 compute-0 ceph-mon[74273]: 2.16 scrub ok
Oct 01 16:38:17 compute-0 ceph-mon[74273]: 11.5 scrub starts
Oct 01 16:38:17 compute-0 ceph-mon[74273]: 11.5 scrub ok
Oct 01 16:38:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:17 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Oct 01 16:38:17 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Oct 01 16:38:18 compute-0 ceph-mon[74273]: 2.11 scrub starts
Oct 01 16:38:18 compute-0 ceph-mon[74273]: 2.11 scrub ok
Oct 01 16:38:18 compute-0 ceph-mon[74273]: 4.11 scrub starts
Oct 01 16:38:18 compute-0 ceph-mon[74273]: 4.11 scrub ok
Oct 01 16:38:18 compute-0 ceph-mon[74273]: pgmap v236: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:18 compute-0 ceph-mon[74273]: 11.7 scrub starts
Oct 01 16:38:18 compute-0 ceph-mon[74273]: 11.7 scrub ok
Oct 01 16:38:18 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Oct 01 16:38:18 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Oct 01 16:38:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:38:19 compute-0 ceph-mon[74273]: 10.16 scrub starts
Oct 01 16:38:19 compute-0 ceph-mon[74273]: 10.16 scrub ok
Oct 01 16:38:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Oct 01 16:38:20 compute-0 ceph-mon[74273]: pgmap v237: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Oct 01 16:38:20 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct 01 16:38:20 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:38:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:38:21 compute-0 ceph-mon[74273]: 5.14 scrub starts
Oct 01 16:38:21 compute-0 ceph-mon[74273]: 5.14 scrub ok
Oct 01 16:38:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Oct 01 16:38:21 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Oct 01 16:38:21 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Oct 01 16:38:22 compute-0 ceph-mon[74273]: pgmap v238: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Oct 01 16:38:22 compute-0 ceph-mon[74273]: 4.1b scrub starts
Oct 01 16:38:22 compute-0 ceph-mon[74273]: 4.1b scrub ok
Oct 01 16:38:22 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct 01 16:38:22 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct 01 16:38:23 compute-0 sudo[107332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:38:23 compute-0 sudo[107332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:38:23 compute-0 sudo[107332]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:23 compute-0 ceph-mon[74273]: 3.1f scrub starts
Oct 01 16:38:23 compute-0 ceph-mon[74273]: 3.1f scrub ok
Oct 01 16:38:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Oct 01 16:38:23 compute-0 sudo[107357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:38:23 compute-0 sudo[107357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:38:23 compute-0 sudo[107357]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:23 compute-0 sudo[107382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:38:23 compute-0 sudo[107382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:38:23 compute-0 sudo[107382]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:23 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 11.a scrub starts
Oct 01 16:38:23 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 11.a scrub ok
Oct 01 16:38:23 compute-0 sudo[107407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:38:23 compute-0 sudo[107407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:38:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:38:24 compute-0 sudo[107407]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:38:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:38:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:38:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:38:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:38:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:38:24 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 80d3785b-439e-466c-993b-7156d2a84ca0 does not exist
Oct 01 16:38:24 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 9567b3f4-1560-4c5a-b5c8-98b8041dfee4 does not exist
Oct 01 16:38:24 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev d53d303b-2280-46d2-b294-f6d66e01e215 does not exist
Oct 01 16:38:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:38:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:38:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:38:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:38:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:38:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:38:24 compute-0 sudo[107464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:38:24 compute-0 sudo[107464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:38:24 compute-0 sudo[107464]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:24 compute-0 sudo[107489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:38:24 compute-0 sudo[107489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:38:24 compute-0 sudo[107489]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:24 compute-0 ceph-mon[74273]: pgmap v239: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Oct 01 16:38:24 compute-0 ceph-mon[74273]: 11.a scrub starts
Oct 01 16:38:24 compute-0 ceph-mon[74273]: 11.a scrub ok
Oct 01 16:38:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:38:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:38:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:38:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:38:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:38:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:38:24 compute-0 sudo[107514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:38:24 compute-0 sudo[107514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:38:24 compute-0 sudo[107514]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:24 compute-0 sudo[107539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:38:24 compute-0 sudo[107539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:38:24 compute-0 podman[107603]: 2025-10-01 16:38:24.830449478 +0000 UTC m=+0.050607476 container create 5c8d8e90b7affc4eff0ec8ab8805d8b11a7f8502b200732252c5ef0d9ac5375c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ritchie, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:38:24 compute-0 systemd[1]: Started libpod-conmon-5c8d8e90b7affc4eff0ec8ab8805d8b11a7f8502b200732252c5ef0d9ac5375c.scope.
Oct 01 16:38:24 compute-0 podman[107603]: 2025-10-01 16:38:24.808814322 +0000 UTC m=+0.028972350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:38:24 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:38:24 compute-0 podman[107603]: 2025-10-01 16:38:24.922421964 +0000 UTC m=+0.142579992 container init 5c8d8e90b7affc4eff0ec8ab8805d8b11a7f8502b200732252c5ef0d9ac5375c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ritchie, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 16:38:24 compute-0 podman[107603]: 2025-10-01 16:38:24.931830695 +0000 UTC m=+0.151988723 container start 5c8d8e90b7affc4eff0ec8ab8805d8b11a7f8502b200732252c5ef0d9ac5375c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 16:38:24 compute-0 podman[107603]: 2025-10-01 16:38:24.936357265 +0000 UTC m=+0.156515293 container attach 5c8d8e90b7affc4eff0ec8ab8805d8b11a7f8502b200732252c5ef0d9ac5375c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 16:38:24 compute-0 affectionate_ritchie[107620]: 167 167
Oct 01 16:38:24 compute-0 systemd[1]: libpod-5c8d8e90b7affc4eff0ec8ab8805d8b11a7f8502b200732252c5ef0d9ac5375c.scope: Deactivated successfully.
Oct 01 16:38:24 compute-0 podman[107603]: 2025-10-01 16:38:24.940587106 +0000 UTC m=+0.160745134 container died 5c8d8e90b7affc4eff0ec8ab8805d8b11a7f8502b200732252c5ef0d9ac5375c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ritchie, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 16:38:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ef30bd4e833558fc33b7a1de2f3904768c773c4794e4301e6ea0aa096c52ae8-merged.mount: Deactivated successfully.
Oct 01 16:38:24 compute-0 podman[107603]: 2025-10-01 16:38:24.984453448 +0000 UTC m=+0.204611446 container remove 5c8d8e90b7affc4eff0ec8ab8805d8b11a7f8502b200732252c5ef0d9ac5375c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:38:25 compute-0 systemd[1]: libpod-conmon-5c8d8e90b7affc4eff0ec8ab8805d8b11a7f8502b200732252c5ef0d9ac5375c.scope: Deactivated successfully.
Oct 01 16:38:25 compute-0 podman[107643]: 2025-10-01 16:38:25.170258322 +0000 UTC m=+0.062631146 container create fc1731c71aea448fc29d4381c834139401d26eff8aa55ba2b51435cc1d2ce8b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 16:38:25 compute-0 systemd[1]: Started libpod-conmon-fc1731c71aea448fc29d4381c834139401d26eff8aa55ba2b51435cc1d2ce8b7.scope.
Oct 01 16:38:25 compute-0 podman[107643]: 2025-10-01 16:38:25.13370723 +0000 UTC m=+0.026080094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:38:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc106e1c4a493a212af9cc53d6738c655195940a980d645472e6196d5bde4e25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc106e1c4a493a212af9cc53d6738c655195940a980d645472e6196d5bde4e25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc106e1c4a493a212af9cc53d6738c655195940a980d645472e6196d5bde4e25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc106e1c4a493a212af9cc53d6738c655195940a980d645472e6196d5bde4e25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc106e1c4a493a212af9cc53d6738c655195940a980d645472e6196d5bde4e25/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:38:25 compute-0 podman[107643]: 2025-10-01 16:38:25.299212553 +0000 UTC m=+0.191585357 container init fc1731c71aea448fc29d4381c834139401d26eff8aa55ba2b51435cc1d2ce8b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:38:25 compute-0 podman[107643]: 2025-10-01 16:38:25.305128067 +0000 UTC m=+0.197500891 container start fc1731c71aea448fc29d4381c834139401d26eff8aa55ba2b51435cc1d2ce8b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_williamson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 16:38:25 compute-0 podman[107643]: 2025-10-01 16:38:25.308811626 +0000 UTC m=+0.201184410 container attach fc1731c71aea448fc29d4381c834139401d26eff8aa55ba2b51435cc1d2ce8b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_williamson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:38:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Oct 01 16:38:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 4.1a deep-scrub starts
Oct 01 16:38:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 4.1a deep-scrub ok
Oct 01 16:38:26 compute-0 reverent_williamson[107660]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:38:26 compute-0 reverent_williamson[107660]: --> relative data size: 1.0
Oct 01 16:38:26 compute-0 reverent_williamson[107660]: --> All data devices are unavailable
Oct 01 16:38:26 compute-0 systemd[1]: libpod-fc1731c71aea448fc29d4381c834139401d26eff8aa55ba2b51435cc1d2ce8b7.scope: Deactivated successfully.
Oct 01 16:38:26 compute-0 podman[107643]: 2025-10-01 16:38:26.322812717 +0000 UTC m=+1.215185511 container died fc1731c71aea448fc29d4381c834139401d26eff8aa55ba2b51435cc1d2ce8b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:38:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc106e1c4a493a212af9cc53d6738c655195940a980d645472e6196d5bde4e25-merged.mount: Deactivated successfully.
Oct 01 16:38:26 compute-0 podman[107643]: 2025-10-01 16:38:26.398411901 +0000 UTC m=+1.290784705 container remove fc1731c71aea448fc29d4381c834139401d26eff8aa55ba2b51435cc1d2ce8b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 16:38:26 compute-0 ceph-mon[74273]: pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Oct 01 16:38:26 compute-0 ceph-mon[74273]: 4.1a deep-scrub starts
Oct 01 16:38:26 compute-0 ceph-mon[74273]: 4.1a deep-scrub ok
Oct 01 16:38:26 compute-0 systemd[1]: libpod-conmon-fc1731c71aea448fc29d4381c834139401d26eff8aa55ba2b51435cc1d2ce8b7.scope: Deactivated successfully.
Oct 01 16:38:26 compute-0 sudo[107539]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:26 compute-0 sudo[107701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:38:26 compute-0 sudo[107701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:38:26 compute-0 sudo[107701]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:26 compute-0 sudo[107726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:38:26 compute-0 sudo[107726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:38:26 compute-0 sudo[107726]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:26 compute-0 sudo[107751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:38:26 compute-0 sudo[107751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:38:26 compute-0 sudo[107751]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:26 compute-0 sudo[107776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:38:26 compute-0 sudo[107776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:38:27 compute-0 podman[107844]: 2025-10-01 16:38:27.17235877 +0000 UTC m=+0.054244136 container create bd76f3a785d9595ebb43639c6a7925b65df204b4c607ff12db505e3f9839368d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 16:38:27 compute-0 systemd[1]: Started libpod-conmon-bd76f3a785d9595ebb43639c6a7925b65df204b4c607ff12db505e3f9839368d.scope.
Oct 01 16:38:27 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:38:27 compute-0 podman[107844]: 2025-10-01 16:38:27.157692466 +0000 UTC m=+0.039577832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:38:27 compute-0 podman[107844]: 2025-10-01 16:38:27.25685502 +0000 UTC m=+0.138740416 container init bd76f3a785d9595ebb43639c6a7925b65df204b4c607ff12db505e3f9839368d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 16:38:27 compute-0 podman[107844]: 2025-10-01 16:38:27.264563025 +0000 UTC m=+0.146448491 container start bd76f3a785d9595ebb43639c6a7925b65df204b4c607ff12db505e3f9839368d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:38:27 compute-0 great_murdock[107860]: 167 167
Oct 01 16:38:27 compute-0 systemd[1]: libpod-bd76f3a785d9595ebb43639c6a7925b65df204b4c607ff12db505e3f9839368d.scope: Deactivated successfully.
Oct 01 16:38:27 compute-0 podman[107844]: 2025-10-01 16:38:27.268861187 +0000 UTC m=+0.150746583 container attach bd76f3a785d9595ebb43639c6a7925b65df204b4c607ff12db505e3f9839368d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 16:38:27 compute-0 podman[107844]: 2025-10-01 16:38:27.269478491 +0000 UTC m=+0.151363857 container died bd76f3a785d9595ebb43639c6a7925b65df204b4c607ff12db505e3f9839368d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 16:38:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1a50327e0c72043ea6c6c0b7324bb392d199875bfa4f14c1402fab043a70adc-merged.mount: Deactivated successfully.
Oct 01 16:38:27 compute-0 podman[107844]: 2025-10-01 16:38:27.312536437 +0000 UTC m=+0.194421803 container remove bd76f3a785d9595ebb43639c6a7925b65df204b4c607ff12db505e3f9839368d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 16:38:27 compute-0 systemd[1]: libpod-conmon-bd76f3a785d9595ebb43639c6a7925b65df204b4c607ff12db505e3f9839368d.scope: Deactivated successfully.
Oct 01 16:38:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Oct 01 16:38:27 compute-0 podman[107884]: 2025-10-01 16:38:27.473036729 +0000 UTC m=+0.044681970 container create 4c0ae3ee17770759cd25730c44eff1bd15bd5823b9969e156982cbf5d127ada4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:38:27 compute-0 systemd[1]: Started libpod-conmon-4c0ae3ee17770759cd25730c44eff1bd15bd5823b9969e156982cbf5d127ada4.scope.
Oct 01 16:38:27 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:38:27 compute-0 podman[107884]: 2025-10-01 16:38:27.455554911 +0000 UTC m=+0.027200182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b449675159e00ad399a5758fc1131335faa712544bd878d2c02d25fa54dd5dd0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b449675159e00ad399a5758fc1131335faa712544bd878d2c02d25fa54dd5dd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b449675159e00ad399a5758fc1131335faa712544bd878d2c02d25fa54dd5dd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b449675159e00ad399a5758fc1131335faa712544bd878d2c02d25fa54dd5dd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:38:27 compute-0 podman[107884]: 2025-10-01 16:38:27.569123476 +0000 UTC m=+0.140768787 container init 4c0ae3ee17770759cd25730c44eff1bd15bd5823b9969e156982cbf5d127ada4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 16:38:27 compute-0 podman[107884]: 2025-10-01 16:38:27.582630866 +0000 UTC m=+0.154276137 container start 4c0ae3ee17770759cd25730c44eff1bd15bd5823b9969e156982cbf5d127ada4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:38:27 compute-0 podman[107884]: 2025-10-01 16:38:27.58866765 +0000 UTC m=+0.160312971 container attach 4c0ae3ee17770759cd25730c44eff1bd15bd5823b9969e156982cbf5d127ada4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]: {
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:     "0": [
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:         {
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "devices": [
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "/dev/loop3"
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             ],
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "lv_name": "ceph_lv0",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "lv_size": "21470642176",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "name": "ceph_lv0",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "tags": {
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.cluster_name": "ceph",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.crush_device_class": "",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.encrypted": "0",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.osd_id": "0",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.type": "block",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.vdo": "0"
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             },
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "type": "block",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "vg_name": "ceph_vg0"
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:         }
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:     ],
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:     "1": [
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:         {
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "devices": [
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "/dev/loop4"
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             ],
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "lv_name": "ceph_lv1",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "lv_size": "21470642176",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "name": "ceph_lv1",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "tags": {
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.cluster_name": "ceph",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.crush_device_class": "",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.encrypted": "0",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.osd_id": "1",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.type": "block",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.vdo": "0"
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             },
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "type": "block",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "vg_name": "ceph_vg1"
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:         }
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:     ],
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:     "2": [
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:         {
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "devices": [
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "/dev/loop5"
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             ],
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "lv_name": "ceph_lv2",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "lv_size": "21470642176",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "name": "ceph_lv2",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "tags": {
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.cluster_name": "ceph",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.crush_device_class": "",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.encrypted": "0",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.osd_id": "2",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.type": "block",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:                 "ceph.vdo": "0"
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             },
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "type": "block",
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:             "vg_name": "ceph_vg2"
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:         }
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]:     ]
Oct 01 16:38:28 compute-0 dreamy_swanson[107901]: }
Oct 01 16:38:28 compute-0 systemd[1]: libpod-4c0ae3ee17770759cd25730c44eff1bd15bd5823b9969e156982cbf5d127ada4.scope: Deactivated successfully.
Oct 01 16:38:28 compute-0 podman[107884]: 2025-10-01 16:38:28.32119365 +0000 UTC m=+0.892838891 container died 4c0ae3ee17770759cd25730c44eff1bd15bd5823b9969e156982cbf5d127ada4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:38:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-b449675159e00ad399a5758fc1131335faa712544bd878d2c02d25fa54dd5dd0-merged.mount: Deactivated successfully.
Oct 01 16:38:28 compute-0 podman[107884]: 2025-10-01 16:38:28.396505056 +0000 UTC m=+0.968150297 container remove 4c0ae3ee17770759cd25730c44eff1bd15bd5823b9969e156982cbf5d127ada4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:38:28 compute-0 ceph-mon[74273]: pgmap v241: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Oct 01 16:38:28 compute-0 systemd[1]: libpod-conmon-4c0ae3ee17770759cd25730c44eff1bd15bd5823b9969e156982cbf5d127ada4.scope: Deactivated successfully.
Oct 01 16:38:28 compute-0 sudo[107776]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:28 compute-0 sudo[107924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:38:28 compute-0 sudo[107924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:38:28 compute-0 sudo[107924]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:28 compute-0 sudo[107949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:38:28 compute-0 sudo[107949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:38:28 compute-0 sudo[107949]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:28 compute-0 sudo[107974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:38:28 compute-0 sudo[107974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:38:28 compute-0 sudo[107974]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:38:28 compute-0 sudo[107999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:38:28 compute-0 sudo[107999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:38:28 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Oct 01 16:38:28 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Oct 01 16:38:29 compute-0 podman[108064]: 2025-10-01 16:38:29.188992372 +0000 UTC m=+0.061360843 container create 7be56254c63ff02f9c56a90d872899bf077f5c6902b9c547dfa033dcc6b6c9a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shockley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 01 16:38:29 compute-0 systemd[1]: Started libpod-conmon-7be56254c63ff02f9c56a90d872899bf077f5c6902b9c547dfa033dcc6b6c9a7.scope.
Oct 01 16:38:29 compute-0 podman[108064]: 2025-10-01 16:38:29.157315639 +0000 UTC m=+0.029684140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:38:29 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:38:29 compute-0 podman[108064]: 2025-10-01 16:38:29.277593571 +0000 UTC m=+0.149962072 container init 7be56254c63ff02f9c56a90d872899bf077f5c6902b9c547dfa033dcc6b6c9a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shockley, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 16:38:29 compute-0 podman[108064]: 2025-10-01 16:38:29.28461102 +0000 UTC m=+0.156979491 container start 7be56254c63ff02f9c56a90d872899bf077f5c6902b9c547dfa033dcc6b6c9a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shockley, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct 01 16:38:29 compute-0 podman[108064]: 2025-10-01 16:38:29.288651619 +0000 UTC m=+0.161020200 container attach 7be56254c63ff02f9c56a90d872899bf077f5c6902b9c547dfa033dcc6b6c9a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 16:38:29 compute-0 nice_shockley[108080]: 167 167
Oct 01 16:38:29 compute-0 systemd[1]: libpod-7be56254c63ff02f9c56a90d872899bf077f5c6902b9c547dfa033dcc6b6c9a7.scope: Deactivated successfully.
Oct 01 16:38:29 compute-0 conmon[108080]: conmon 7be56254c63ff02f9c56 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7be56254c63ff02f9c56a90d872899bf077f5c6902b9c547dfa033dcc6b6c9a7.scope/container/memory.events
Oct 01 16:38:29 compute-0 podman[108064]: 2025-10-01 16:38:29.291817471 +0000 UTC m=+0.164185942 container died 7be56254c63ff02f9c56a90d872899bf077f5c6902b9c547dfa033dcc6b6c9a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shockley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct 01 16:38:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-258e3a932e5acf975f863dfc0f657b82ece03e23bd7fd7010c99efad4033d4c1-merged.mount: Deactivated successfully.
Oct 01 16:38:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Oct 01 16:38:29 compute-0 podman[108064]: 2025-10-01 16:38:29.3392377 +0000 UTC m=+0.211606201 container remove 7be56254c63ff02f9c56a90d872899bf077f5c6902b9c547dfa033dcc6b6c9a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shockley, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:38:29 compute-0 systemd[1]: libpod-conmon-7be56254c63ff02f9c56a90d872899bf077f5c6902b9c547dfa033dcc6b6c9a7.scope: Deactivated successfully.
Oct 01 16:38:29 compute-0 ceph-mon[74273]: 4.1c scrub starts
Oct 01 16:38:29 compute-0 ceph-mon[74273]: 4.1c scrub ok
Oct 01 16:38:29 compute-0 podman[108104]: 2025-10-01 16:38:29.530622352 +0000 UTC m=+0.053639995 container create f118136e21d575d2ee0959362c661a8c72ffc2eb30743b5807926bd47943cda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wright, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:38:29 compute-0 systemd[1]: Started libpod-conmon-f118136e21d575d2ee0959362c661a8c72ffc2eb30743b5807926bd47943cda4.scope.
Oct 01 16:38:29 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:38:29 compute-0 podman[108104]: 2025-10-01 16:38:29.504757555 +0000 UTC m=+0.027775268 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:38:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0e7ed7f79b63039720822e4a31cfdab796182fa101343377c0638e82f4aff2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:38:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0e7ed7f79b63039720822e4a31cfdab796182fa101343377c0638e82f4aff2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:38:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0e7ed7f79b63039720822e4a31cfdab796182fa101343377c0638e82f4aff2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:38:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0e7ed7f79b63039720822e4a31cfdab796182fa101343377c0638e82f4aff2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:38:29 compute-0 podman[108104]: 2025-10-01 16:38:29.618180526 +0000 UTC m=+0.141198179 container init f118136e21d575d2ee0959362c661a8c72ffc2eb30743b5807926bd47943cda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 16:38:29 compute-0 podman[108104]: 2025-10-01 16:38:29.626026995 +0000 UTC m=+0.149044638 container start f118136e21d575d2ee0959362c661a8c72ffc2eb30743b5807926bd47943cda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wright, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 16:38:29 compute-0 podman[108104]: 2025-10-01 16:38:29.629408973 +0000 UTC m=+0.152426606 container attach f118136e21d575d2ee0959362c661a8c72ffc2eb30743b5807926bd47943cda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wright, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:38:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 11.c scrub starts
Oct 01 16:38:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 11.c scrub ok
Oct 01 16:38:30 compute-0 ceph-mon[74273]: pgmap v242: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Oct 01 16:38:30 compute-0 happy_wright[108121]: {
Oct 01 16:38:30 compute-0 happy_wright[108121]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:38:30 compute-0 happy_wright[108121]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:38:30 compute-0 happy_wright[108121]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:38:30 compute-0 happy_wright[108121]:         "osd_id": 2,
Oct 01 16:38:30 compute-0 happy_wright[108121]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:38:30 compute-0 happy_wright[108121]:         "type": "bluestore"
Oct 01 16:38:30 compute-0 happy_wright[108121]:     },
Oct 01 16:38:30 compute-0 happy_wright[108121]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:38:30 compute-0 happy_wright[108121]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:38:30 compute-0 happy_wright[108121]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:38:30 compute-0 happy_wright[108121]:         "osd_id": 0,
Oct 01 16:38:30 compute-0 happy_wright[108121]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:38:30 compute-0 happy_wright[108121]:         "type": "bluestore"
Oct 01 16:38:30 compute-0 happy_wright[108121]:     },
Oct 01 16:38:30 compute-0 happy_wright[108121]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:38:30 compute-0 happy_wright[108121]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:38:30 compute-0 happy_wright[108121]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:38:30 compute-0 happy_wright[108121]:         "osd_id": 1,
Oct 01 16:38:30 compute-0 happy_wright[108121]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:38:30 compute-0 happy_wright[108121]:         "type": "bluestore"
Oct 01 16:38:30 compute-0 happy_wright[108121]:     }
Oct 01 16:38:30 compute-0 happy_wright[108121]: }
Oct 01 16:38:30 compute-0 systemd[1]: libpod-f118136e21d575d2ee0959362c661a8c72ffc2eb30743b5807926bd47943cda4.scope: Deactivated successfully.
Oct 01 16:38:30 compute-0 systemd[1]: libpod-f118136e21d575d2ee0959362c661a8c72ffc2eb30743b5807926bd47943cda4.scope: Consumed 1.055s CPU time.
Oct 01 16:38:30 compute-0 conmon[108121]: conmon f118136e21d575d2ee09 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f118136e21d575d2ee0959362c661a8c72ffc2eb30743b5807926bd47943cda4.scope/container/memory.events
Oct 01 16:38:30 compute-0 podman[108104]: 2025-10-01 16:38:30.676755667 +0000 UTC m=+1.199773330 container died f118136e21d575d2ee0959362c661a8c72ffc2eb30743b5807926bd47943cda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wright, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:38:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf0e7ed7f79b63039720822e4a31cfdab796182fa101343377c0638e82f4aff2-merged.mount: Deactivated successfully.
Oct 01 16:38:30 compute-0 podman[108104]: 2025-10-01 16:38:30.767479302 +0000 UTC m=+1.290496935 container remove f118136e21d575d2ee0959362c661a8c72ffc2eb30743b5807926bd47943cda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wright, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:38:30 compute-0 systemd[1]: libpod-conmon-f118136e21d575d2ee0959362c661a8c72ffc2eb30743b5807926bd47943cda4.scope: Deactivated successfully.
Oct 01 16:38:30 compute-0 sudo[107999]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:38:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:38:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:38:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:38:30 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 6f971757-6526-4aa2-9fe4-756af4012ac0 does not exist
Oct 01 16:38:30 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev b7ffe451-b289-4692-9593-0973df8495af does not exist
Oct 01 16:38:30 compute-0 sudo[108166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:38:30 compute-0 sudo[108166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:38:30 compute-0 sudo[108166]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:30 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Oct 01 16:38:30 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Oct 01 16:38:30 compute-0 sudo[108191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:38:30 compute-0 sudo[108191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:38:30 compute-0 sudo[108191]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:31 compute-0 ceph-mon[74273]: 11.c scrub starts
Oct 01 16:38:31 compute-0 ceph-mon[74273]: 11.c scrub ok
Oct 01 16:38:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:38:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:38:31 compute-0 ceph-mon[74273]: 7.1a scrub starts
Oct 01 16:38:31 compute-0 ceph-mon[74273]: 7.1a scrub ok
Oct 01 16:38:31 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct 01 16:38:31 compute-0 sudo[107184]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:32 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct 01 16:38:32 compute-0 sudo[108365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xurvcxzveemgumqywhnenmulgrioxzpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336712.2219646-128-193185508610223/AnsiballZ_command.py'
Oct 01 16:38:32 compute-0 sudo[108365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:32 compute-0 python3.9[108367]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:38:32 compute-0 ceph-mon[74273]: pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:32 compute-0 ceph-mon[74273]: 3.1e scrub starts
Oct 01 16:38:32 compute-0 ceph-mon[74273]: 3.1e scrub ok
Oct 01 16:38:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:33 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Oct 01 16:38:33 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Oct 01 16:38:33 compute-0 sudo[108365]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:38:33 compute-0 ceph-mon[74273]: 11.13 scrub starts
Oct 01 16:38:33 compute-0 ceph-mon[74273]: 11.13 scrub ok
Oct 01 16:38:34 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Oct 01 16:38:34 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Oct 01 16:38:34 compute-0 sudo[108652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cffurhlvfoobkiqmylwcpirorzchbpty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336713.770861-136-192031960453616/AnsiballZ_selinux.py'
Oct 01 16:38:34 compute-0 sudo[108652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:34 compute-0 python3.9[108654]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 01 16:38:34 compute-0 sudo[108652]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:34 compute-0 ceph-mon[74273]: pgmap v244: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:34 compute-0 ceph-mon[74273]: 11.15 scrub starts
Oct 01 16:38:34 compute-0 ceph-mon[74273]: 11.15 scrub ok
Oct 01 16:38:35 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Oct 01 16:38:35 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Oct 01 16:38:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:35 compute-0 sudo[108804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csbgibqcgnywpolafjufdglzexzdesot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336715.1172984-147-76521383994866/AnsiballZ_command.py'
Oct 01 16:38:35 compute-0 sudo[108804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:35 compute-0 python3.9[108806]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 01 16:38:35 compute-0 sudo[108804]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:35 compute-0 ceph-mon[74273]: 3.1d scrub starts
Oct 01 16:38:35 compute-0 ceph-mon[74273]: 3.1d scrub ok
Oct 01 16:38:36 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Oct 01 16:38:36 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Oct 01 16:38:36 compute-0 sudo[108956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctuedvaxpczeusqtiiwsugyevztieriq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336715.8418684-155-116916060936228/AnsiballZ_file.py'
Oct 01 16:38:36 compute-0 sudo[108956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:36 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Oct 01 16:38:36 compute-0 python3.9[108958]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:38:36 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Oct 01 16:38:36 compute-0 sudo[108956]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:36 compute-0 ceph-mon[74273]: pgmap v245: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:36 compute-0 ceph-mon[74273]: 11.11 scrub starts
Oct 01 16:38:36 compute-0 ceph-mon[74273]: 11.11 scrub ok
Oct 01 16:38:37 compute-0 sudo[109108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crbnvivoxrkdoufuurvjkvziqdxxoiwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336716.5859323-163-172020259238353/AnsiballZ_mount.py'
Oct 01 16:38:37 compute-0 sudo[109108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:37 compute-0 python3.9[109110]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 01 16:38:37 compute-0 sudo[109108]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:37 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Oct 01 16:38:37 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Oct 01 16:38:37 compute-0 ceph-mon[74273]: 11.17 scrub starts
Oct 01 16:38:37 compute-0 ceph-mon[74273]: 11.17 scrub ok
Oct 01 16:38:37 compute-0 ceph-mon[74273]: 11.16 scrub starts
Oct 01 16:38:37 compute-0 ceph-mon[74273]: 11.16 scrub ok
Oct 01 16:38:38 compute-0 sudo[109260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvcssjyfhexdigjvtygrdxneolnkxmcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336717.9758954-191-96797595521393/AnsiballZ_file.py'
Oct 01 16:38:38 compute-0 sudo[109260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:38 compute-0 python3.9[109262]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:38:38 compute-0 sudo[109260]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:38:38 compute-0 ceph-mon[74273]: pgmap v246: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:38 compute-0 sudo[109412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcefdxhbxmzapjthzankwiqufbgpjhrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336718.6163337-199-133999192646497/AnsiballZ_stat.py'
Oct 01 16:38:38 compute-0 sudo[109412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:39 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Oct 01 16:38:39 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Oct 01 16:38:39 compute-0 python3.9[109414]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:38:39 compute-0 sudo[109412]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:39 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Oct 01 16:38:39 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Oct 01 16:38:39 compute-0 sudo[109490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vojbvljsnjhbehriyewjkevkpftehykk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336718.6163337-199-133999192646497/AnsiballZ_file.py'
Oct 01 16:38:39 compute-0 sudo[109490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:39 compute-0 python3.9[109492]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:38:39 compute-0 sudo[109490]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:39 compute-0 ceph-mon[74273]: 8.11 scrub starts
Oct 01 16:38:39 compute-0 ceph-mon[74273]: 8.11 scrub ok
Oct 01 16:38:39 compute-0 ceph-mon[74273]: 11.1d scrub starts
Oct 01 16:38:39 compute-0 ceph-mon[74273]: 11.1d scrub ok
Oct 01 16:38:40 compute-0 sudo[109642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfaejldqnfueiaiemejbyvsldpqrxpkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336720.1409636-223-31585527285016/AnsiballZ_getent.py'
Oct 01 16:38:40 compute-0 sudo[109642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:40 compute-0 python3.9[109644]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 01 16:38:40 compute-0 sudo[109642]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:40 compute-0 ceph-mon[74273]: pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:41 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Oct 01 16:38:41 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Oct 01 16:38:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:38:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:38:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:38:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:38:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:38:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:38:41 compute-0 sudo[109795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikliijtmwxvnymabbnfwsmphzdfuiqmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336721.0364044-233-23473479872450/AnsiballZ_getent.py'
Oct 01 16:38:41 compute-0 sudo[109795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:41 compute-0 python3.9[109797]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 01 16:38:41 compute-0 sudo[109795]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:41 compute-0 ceph-mon[74273]: 8.12 scrub starts
Oct 01 16:38:41 compute-0 ceph-mon[74273]: 8.12 scrub ok
Oct 01 16:38:41 compute-0 ceph-mon[74273]: pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:42 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct 01 16:38:42 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct 01 16:38:42 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct 01 16:38:42 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct 01 16:38:42 compute-0 sudo[109948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itnazcnmbsptkbsxnoofrhjvdzxydmra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336721.84057-241-110137413913755/AnsiballZ_group.py'
Oct 01 16:38:42 compute-0 sudo[109948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:42 compute-0 python3.9[109950]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 01 16:38:42 compute-0 sudo[109948]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:42 compute-0 ceph-mon[74273]: 5.11 scrub starts
Oct 01 16:38:42 compute-0 ceph-mon[74273]: 5.11 scrub ok
Oct 01 16:38:43 compute-0 sudo[110100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsqjbsbcosqjkhnqdajrhjxxgdfhtxeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336722.8950818-250-15901275694697/AnsiballZ_file.py'
Oct 01 16:38:43 compute-0 sudo[110100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:43 compute-0 python3.9[110102]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 01 16:38:43 compute-0 sudo[110100]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:38:43 compute-0 ceph-mon[74273]: 7.1b scrub starts
Oct 01 16:38:43 compute-0 ceph-mon[74273]: 7.1b scrub ok
Oct 01 16:38:43 compute-0 ceph-mon[74273]: pgmap v249: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:44 compute-0 sudo[110252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjufcyrxpvwuzjoczexnjizmvhtzcufs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336723.7047367-261-108277831910212/AnsiballZ_dnf.py'
Oct 01 16:38:44 compute-0 sudo[110252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:44 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Oct 01 16:38:44 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Oct 01 16:38:44 compute-0 python3.9[110254]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:38:44 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.14 deep-scrub starts
Oct 01 16:38:44 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.14 deep-scrub ok
Oct 01 16:38:44 compute-0 ceph-mon[74273]: 7.1c scrub starts
Oct 01 16:38:44 compute-0 ceph-mon[74273]: 7.1c scrub ok
Oct 01 16:38:45 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct 01 16:38:45 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct 01 16:38:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:45 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct 01 16:38:45 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct 01 16:38:45 compute-0 sudo[110252]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:45 compute-0 sudo[110405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olqdajeyucrrokolyfaviiuwxilzkupk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336725.5827372-269-266115277665974/AnsiballZ_file.py'
Oct 01 16:38:45 compute-0 sudo[110405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:45 compute-0 ceph-mon[74273]: 8.14 deep-scrub starts
Oct 01 16:38:45 compute-0 ceph-mon[74273]: 8.14 deep-scrub ok
Oct 01 16:38:45 compute-0 ceph-mon[74273]: 7.2 scrub starts
Oct 01 16:38:45 compute-0 ceph-mon[74273]: 7.2 scrub ok
Oct 01 16:38:45 compute-0 ceph-mon[74273]: pgmap v250: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:46 compute-0 python3.9[110407]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:38:46 compute-0 sudo[110405]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:46 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.d scrub starts
Oct 01 16:38:46 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.d scrub ok
Oct 01 16:38:46 compute-0 sudo[110557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydbfiwbamvaucbrofclgxsxktbejxhpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336726.3133287-277-105043668216470/AnsiballZ_stat.py'
Oct 01 16:38:46 compute-0 sudo[110557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:46 compute-0 python3.9[110559]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:38:46 compute-0 sudo[110557]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:46 compute-0 ceph-mon[74273]: 7.1f scrub starts
Oct 01 16:38:46 compute-0 ceph-mon[74273]: 7.1f scrub ok
Oct 01 16:38:46 compute-0 ceph-mon[74273]: 11.d scrub starts
Oct 01 16:38:46 compute-0 ceph-mon[74273]: 11.d scrub ok
Oct 01 16:38:47 compute-0 sudo[110635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfwhkrrxzbicldbqbpjasyopdyrtiqtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336726.3133287-277-105043668216470/AnsiballZ_file.py'
Oct 01 16:38:47 compute-0 sudo[110635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:47 compute-0 python3.9[110637]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:38:47 compute-0 sudo[110635]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:47 compute-0 sudo[110787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbmarcvcyayecfjvsajwnoxtpmbmcura ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336727.503819-290-215804918401349/AnsiballZ_stat.py'
Oct 01 16:38:47 compute-0 sudo[110787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:47 compute-0 ceph-mon[74273]: pgmap v251: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:48 compute-0 python3.9[110789]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:38:48 compute-0 sudo[110787]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:48 compute-0 sudo[110865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxjdmwiifladgqpktbslfujmghpdjqio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336727.503819-290-215804918401349/AnsiballZ_file.py'
Oct 01 16:38:48 compute-0 sudo[110865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:48 compute-0 python3.9[110867]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:38:48 compute-0 sudo[110865]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:38:49 compute-0 sudo[111017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isgacfyyyeljywmgmvburhvdsyavlsmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336728.9021895-305-40744017558653/AnsiballZ_dnf.py'
Oct 01 16:38:49 compute-0 sudo[111017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:49 compute-0 python3.9[111019]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:38:50 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.7 deep-scrub starts
Oct 01 16:38:50 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.7 deep-scrub ok
Oct 01 16:38:50 compute-0 ceph-mon[74273]: pgmap v252: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:50 compute-0 ceph-mon[74273]: 3.7 deep-scrub starts
Oct 01 16:38:50 compute-0 ceph-mon[74273]: 3.7 deep-scrub ok
Oct 01 16:38:50 compute-0 sudo[111017]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:51 compute-0 python3.9[111170]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:38:52 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Oct 01 16:38:52 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Oct 01 16:38:52 compute-0 ceph-mon[74273]: pgmap v253: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:52 compute-0 python3.9[111322]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 01 16:38:53 compute-0 python3.9[111472]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:38:53 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Oct 01 16:38:53 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Oct 01 16:38:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:53 compute-0 ceph-mon[74273]: 11.14 scrub starts
Oct 01 16:38:53 compute-0 ceph-mon[74273]: 11.14 scrub ok
Oct 01 16:38:53 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Oct 01 16:38:53 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Oct 01 16:38:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:38:54 compute-0 sudo[111622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulobssmulylwoeiozhbxgcckzkhwskvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336733.5278199-346-227290954233284/AnsiballZ_systemd.py'
Oct 01 16:38:54 compute-0 sudo[111622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:54 compute-0 ceph-mon[74273]: 7.18 scrub starts
Oct 01 16:38:54 compute-0 ceph-mon[74273]: 7.18 scrub ok
Oct 01 16:38:54 compute-0 ceph-mon[74273]: pgmap v254: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:54 compute-0 ceph-mon[74273]: 2.17 scrub starts
Oct 01 16:38:54 compute-0 ceph-mon[74273]: 2.17 scrub ok
Oct 01 16:38:54 compute-0 python3.9[111624]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:38:55 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Oct 01 16:38:55 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Oct 01 16:38:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:55 compute-0 ceph-mon[74273]: 3.1b scrub starts
Oct 01 16:38:55 compute-0 ceph-mon[74273]: 3.1b scrub ok
Oct 01 16:38:55 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 01 16:38:55 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Oct 01 16:38:55 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 01 16:38:55 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 01 16:38:55 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 01 16:38:55 compute-0 sudo[111622]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:56 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct 01 16:38:56 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct 01 16:38:56 compute-0 ceph-mon[74273]: pgmap v255: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:56 compute-0 ceph-mon[74273]: 3.18 scrub starts
Oct 01 16:38:56 compute-0 ceph-mon[74273]: 3.18 scrub ok
Oct 01 16:38:56 compute-0 python3.9[111785]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 01 16:38:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:58 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct 01 16:38:58 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct 01 16:38:58 compute-0 ceph-mon[74273]: pgmap v256: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:58 compute-0 ceph-mon[74273]: 8.10 scrub starts
Oct 01 16:38:58 compute-0 sudo[111935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggnkadgthjpyhkyeqfmxdrbnoitxmxmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336738.197733-403-216987521916929/AnsiballZ_systemd.py'
Oct 01 16:38:58 compute-0 sudo[111935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:58 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.12 deep-scrub starts
Oct 01 16:38:58 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.12 deep-scrub ok
Oct 01 16:38:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:38:58 compute-0 python3.9[111937]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:38:58 compute-0 sudo[111935]: pam_unix(sudo:session): session closed for user root
Oct 01 16:38:59 compute-0 sudo[112089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oukcvfimaqxadwsnnymgywduvfdnrswk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336739.010469-403-123326169299632/AnsiballZ_systemd.py'
Oct 01 16:38:59 compute-0 sudo[112089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:38:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:38:59 compute-0 ceph-mon[74273]: 8.10 scrub ok
Oct 01 16:38:59 compute-0 ceph-mon[74273]: 5.12 deep-scrub starts
Oct 01 16:38:59 compute-0 ceph-mon[74273]: 5.12 deep-scrub ok
Oct 01 16:38:59 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Oct 01 16:38:59 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Oct 01 16:38:59 compute-0 python3.9[112091]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:38:59 compute-0 sudo[112089]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:00 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.d scrub starts
Oct 01 16:39:00 compute-0 sshd-session[105422]: Connection closed by 192.168.122.30 port 47552
Oct 01 16:39:00 compute-0 sshd-session[105419]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:39:00 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Oct 01 16:39:00 compute-0 systemd[1]: session-35.scope: Consumed 1min 1.721s CPU time.
Oct 01 16:39:00 compute-0 systemd-logind[788]: Session 35 logged out. Waiting for processes to exit.
Oct 01 16:39:00 compute-0 systemd-logind[788]: Removed session 35.
Oct 01 16:39:00 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.d scrub ok
Oct 01 16:39:00 compute-0 ceph-mon[74273]: pgmap v257: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:00 compute-0 ceph-mon[74273]: 2.15 scrub starts
Oct 01 16:39:00 compute-0 ceph-mon[74273]: 2.15 scrub ok
Oct 01 16:39:00 compute-0 ceph-mon[74273]: 8.d scrub starts
Oct 01 16:39:00 compute-0 ceph-mon[74273]: 8.d scrub ok
Oct 01 16:39:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:01 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Oct 01 16:39:01 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Oct 01 16:39:02 compute-0 ceph-mon[74273]: pgmap v258: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:02 compute-0 ceph-mon[74273]: 5.13 scrub starts
Oct 01 16:39:02 compute-0 ceph-mon[74273]: 5.13 scrub ok
Oct 01 16:39:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:39:04 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct 01 16:39:04 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct 01 16:39:04 compute-0 ceph-mon[74273]: pgmap v259: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:05 compute-0 ceph-mon[74273]: 5.16 scrub starts
Oct 01 16:39:05 compute-0 ceph-mon[74273]: 5.16 scrub ok
Oct 01 16:39:06 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct 01 16:39:06 compute-0 sshd-session[112118]: Accepted publickey for zuul from 192.168.122.30 port 33256 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:39:06 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct 01 16:39:06 compute-0 systemd-logind[788]: New session 36 of user zuul.
Oct 01 16:39:06 compute-0 systemd[1]: Started Session 36 of User zuul.
Oct 01 16:39:06 compute-0 sshd-session[112118]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:39:06 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Oct 01 16:39:06 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Oct 01 16:39:06 compute-0 ceph-mon[74273]: pgmap v260: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:06 compute-0 ceph-mon[74273]: 3.5 scrub starts
Oct 01 16:39:06 compute-0 ceph-mon[74273]: 3.5 scrub ok
Oct 01 16:39:07 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.15 deep-scrub starts
Oct 01 16:39:07 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.15 deep-scrub ok
Oct 01 16:39:07 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct 01 16:39:07 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct 01 16:39:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:07 compute-0 python3.9[112271]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:39:07 compute-0 ceph-mon[74273]: 11.10 scrub starts
Oct 01 16:39:07 compute-0 ceph-mon[74273]: 11.10 scrub ok
Oct 01 16:39:07 compute-0 ceph-mon[74273]: 8.15 deep-scrub starts
Oct 01 16:39:07 compute-0 ceph-mon[74273]: 8.15 deep-scrub ok
Oct 01 16:39:08 compute-0 sudo[112425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjtxutwjjavswaxoeiuscrvxluucwhjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336747.9215791-36-37333234504443/AnsiballZ_getent.py'
Oct 01 16:39:08 compute-0 sudo[112425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:08 compute-0 ceph-mon[74273]: 7.3 scrub starts
Oct 01 16:39:08 compute-0 ceph-mon[74273]: 7.3 scrub ok
Oct 01 16:39:08 compute-0 ceph-mon[74273]: pgmap v261: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:08 compute-0 python3.9[112427]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 01 16:39:08 compute-0 sudo[112425]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:39:09 compute-0 sudo[112578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kytrieoyxlketkuvzmucnzpwbxtqkkgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336748.8788235-48-202719718935658/AnsiballZ_setup.py'
Oct 01 16:39:09 compute-0 sudo[112578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:09 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.e scrub starts
Oct 01 16:39:09 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.e scrub ok
Oct 01 16:39:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:09 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct 01 16:39:09 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct 01 16:39:09 compute-0 python3.9[112580]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 16:39:09 compute-0 sudo[112578]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:10 compute-0 sudo[112662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgrjmcirdxwmxqrfwirsdlyqpjishaik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336748.8788235-48-202719718935658/AnsiballZ_dnf.py'
Oct 01 16:39:10 compute-0 sudo[112662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:10 compute-0 python3.9[112664]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 01 16:39:10 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Oct 01 16:39:10 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Oct 01 16:39:10 compute-0 ceph-mon[74273]: 11.e scrub starts
Oct 01 16:39:10 compute-0 ceph-mon[74273]: 11.e scrub ok
Oct 01 16:39:10 compute-0 ceph-mon[74273]: pgmap v262: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:10 compute-0 ceph-mon[74273]: 10.1a scrub starts
Oct 01 16:39:10 compute-0 ceph-mon[74273]: 10.1a scrub ok
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:39:11
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'images', '.mgr', 'default.rgw.log', 'default.rgw.meta']
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:39:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:39:11 compute-0 ceph-mon[74273]: 10.19 scrub starts
Oct 01 16:39:11 compute-0 ceph-mon[74273]: 10.19 scrub ok
Oct 01 16:39:11 compute-0 sudo[112662]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:12 compute-0 sudo[112815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxpbvzccqvtyyhburncffuhhpnfooggx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336751.8741732-62-122037157346933/AnsiballZ_dnf.py'
Oct 01 16:39:12 compute-0 sudo[112815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:12 compute-0 python3.9[112817]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:39:12 compute-0 ceph-mon[74273]: pgmap v263: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:12 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Oct 01 16:39:13 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Oct 01 16:39:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:13 compute-0 sudo[112815]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:13 compute-0 ceph-mon[74273]: 11.9 scrub starts
Oct 01 16:39:13 compute-0 ceph-mon[74273]: 11.9 scrub ok
Oct 01 16:39:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:39:14 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct 01 16:39:14 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct 01 16:39:14 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.f scrub starts
Oct 01 16:39:14 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.f scrub ok
Oct 01 16:39:14 compute-0 sudo[112968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaohrwcmyhreqxqbgbjnzphuhiggpkkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336753.7022307-70-164761536543634/AnsiballZ_systemd.py'
Oct 01 16:39:14 compute-0 sudo[112968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:14 compute-0 ceph-mon[74273]: pgmap v264: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:14 compute-0 ceph-mon[74273]: 7.5 scrub starts
Oct 01 16:39:14 compute-0 ceph-mon[74273]: 7.5 scrub ok
Oct 01 16:39:14 compute-0 ceph-mon[74273]: 11.f scrub starts
Oct 01 16:39:14 compute-0 ceph-mon[74273]: 11.f scrub ok
Oct 01 16:39:14 compute-0 python3.9[112970]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 01 16:39:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:15 compute-0 sudo[112968]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:16 compute-0 python3.9[113123]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:39:16 compute-0 ceph-mon[74273]: pgmap v265: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:17 compute-0 sudo[113273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbyrrjhzlzpmzgnojdncioumgancpdxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336756.827971-88-122013406375544/AnsiballZ_sefcontext.py'
Oct 01 16:39:17 compute-0 sudo[113273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:17 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct 01 16:39:17 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct 01 16:39:17 compute-0 python3.9[113275]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 01 16:39:17 compute-0 ceph-mon[74273]: 5.9 scrub starts
Oct 01 16:39:17 compute-0 ceph-mon[74273]: 5.9 scrub ok
Oct 01 16:39:17 compute-0 sudo[113273]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:18 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.b scrub starts
Oct 01 16:39:18 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.b scrub ok
Oct 01 16:39:18 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.c scrub starts
Oct 01 16:39:18 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.c scrub ok
Oct 01 16:39:18 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Oct 01 16:39:18 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Oct 01 16:39:18 compute-0 ceph-mon[74273]: pgmap v266: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:18 compute-0 ceph-mon[74273]: 11.b scrub starts
Oct 01 16:39:18 compute-0 ceph-mon[74273]: 11.b scrub ok
Oct 01 16:39:18 compute-0 ceph-mon[74273]: 8.c scrub starts
Oct 01 16:39:18 compute-0 ceph-mon[74273]: 8.c scrub ok
Oct 01 16:39:18 compute-0 ceph-mon[74273]: 10.6 scrub starts
Oct 01 16:39:18 compute-0 ceph-mon[74273]: 10.6 scrub ok
Oct 01 16:39:18 compute-0 python3.9[113425]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:39:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:39:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:19 compute-0 sudo[113581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyrdhmzdvruptuhicjdimnhyvymykuii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336759.067395-106-223603591494126/AnsiballZ_dnf.py'
Oct 01 16:39:19 compute-0 sudo[113581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:19 compute-0 python3.9[113583]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:39:20 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct 01 16:39:20 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct 01 16:39:20 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.d scrub starts
Oct 01 16:39:20 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.d scrub ok
Oct 01 16:39:20 compute-0 ceph-mon[74273]: pgmap v267: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:20 compute-0 ceph-mon[74273]: 7.c scrub starts
Oct 01 16:39:20 compute-0 ceph-mon[74273]: 7.c scrub ok
Oct 01 16:39:20 compute-0 ceph-mon[74273]: 2.d scrub starts
Oct 01 16:39:20 compute-0 ceph-mon[74273]: 2.d scrub ok
Oct 01 16:39:20 compute-0 sudo[113581]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:39:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:39:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:21 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.f deep-scrub starts
Oct 01 16:39:21 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.f deep-scrub ok
Oct 01 16:39:21 compute-0 sudo[113734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvnyktwogpzmwtxvwxzyslcwdaumpydu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336761.0185716-114-12781821573534/AnsiballZ_command.py'
Oct 01 16:39:21 compute-0 sudo[113734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:21 compute-0 ceph-mon[74273]: 5.f deep-scrub starts
Oct 01 16:39:21 compute-0 ceph-mon[74273]: 5.f deep-scrub ok
Oct 01 16:39:21 compute-0 python3.9[113736]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:39:22 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct 01 16:39:22 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct 01 16:39:22 compute-0 sudo[113734]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:22 compute-0 ceph-mon[74273]: pgmap v268: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:22 compute-0 ceph-mon[74273]: 11.12 scrub starts
Oct 01 16:39:22 compute-0 ceph-mon[74273]: 11.12 scrub ok
Oct 01 16:39:23 compute-0 sudo[114021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oersvzxtaszmglfnruykealqjwxnkxpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336762.7644672-122-154815683309723/AnsiballZ_file.py'
Oct 01 16:39:23 compute-0 sudo[114021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:23 compute-0 python3.9[114023]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 01 16:39:23 compute-0 sudo[114021]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:39:24 compute-0 python3.9[114173]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:39:24 compute-0 ceph-mon[74273]: pgmap v269: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:24 compute-0 sudo[114325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aetjplchcnfxoatlflqatgormlyijgbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336764.5799758-138-108981389712740/AnsiballZ_dnf.py'
Oct 01 16:39:24 compute-0 sudo[114325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:25 compute-0 python3.9[114327]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:39:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:26 compute-0 sudo[114325]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:26 compute-0 ceph-mon[74273]: pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:26 compute-0 sudo[114478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzqyxzkjwqhofcrpaueymmdqkxixmtyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336766.5632114-147-259437489343298/AnsiballZ_dnf.py'
Oct 01 16:39:26 compute-0 sudo[114478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:27 compute-0 python3.9[114480]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:39:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:28 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Oct 01 16:39:28 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Oct 01 16:39:28 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.e scrub starts
Oct 01 16:39:28 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.e scrub ok
Oct 01 16:39:28 compute-0 sudo[114478]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:28 compute-0 ceph-mon[74273]: pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:28 compute-0 ceph-mon[74273]: 3.8 scrub starts
Oct 01 16:39:28 compute-0 ceph-mon[74273]: 3.8 scrub ok
Oct 01 16:39:28 compute-0 ceph-mon[74273]: 8.e scrub starts
Oct 01 16:39:28 compute-0 ceph-mon[74273]: 8.e scrub ok
Oct 01 16:39:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:39:29 compute-0 sudo[114631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzeegpwvotlancjegcxjhanaqjhvlbal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336768.7083826-159-211069573941988/AnsiballZ_stat.py'
Oct 01 16:39:29 compute-0 sudo[114631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:29 compute-0 python3.9[114633]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:39:29 compute-0 sudo[114631]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:30 compute-0 sudo[114785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqcglagguykcfxcuuceytulhhbdjleiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336769.5406008-167-2193793410236/AnsiballZ_slurp.py'
Oct 01 16:39:30 compute-0 sudo[114785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:30 compute-0 python3.9[114787]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct 01 16:39:30 compute-0 sudo[114785]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:30 compute-0 ceph-mon[74273]: pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:31 compute-0 sudo[114812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:39:31 compute-0 sudo[114812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:39:31 compute-0 sudo[114812]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:31 compute-0 sshd-session[112121]: Connection closed by 192.168.122.30 port 33256
Oct 01 16:39:31 compute-0 sshd-session[112118]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:39:31 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Oct 01 16:39:31 compute-0 systemd[1]: session-36.scope: Consumed 18.188s CPU time.
Oct 01 16:39:31 compute-0 systemd-logind[788]: Session 36 logged out. Waiting for processes to exit.
Oct 01 16:39:31 compute-0 systemd-logind[788]: Removed session 36.
Oct 01 16:39:31 compute-0 sudo[114837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:39:31 compute-0 sudo[114837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:39:31 compute-0 sudo[114837]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:31 compute-0 sudo[114862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:39:31 compute-0 sudo[114862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:39:31 compute-0 sudo[114862]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:31 compute-0 sudo[114887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:39:31 compute-0 sudo[114887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:39:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:31 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.b scrub starts
Oct 01 16:39:31 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.b scrub ok
Oct 01 16:39:31 compute-0 ceph-mon[74273]: 10.b scrub starts
Oct 01 16:39:31 compute-0 ceph-mon[74273]: 10.b scrub ok
Oct 01 16:39:31 compute-0 sudo[114887]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:39:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:39:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:39:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:39:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:39:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:39:31 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 8b537864-1882-450a-a807-13aa066d8800 does not exist
Oct 01 16:39:31 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 182ac18a-8077-4714-a354-25d941e32db6 does not exist
Oct 01 16:39:31 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev dc4718df-5448-4570-b160-dbe0a93b9287 does not exist
Oct 01 16:39:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:39:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:39:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:39:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:39:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:39:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:39:31 compute-0 sudo[114943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:39:31 compute-0 sudo[114943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:39:31 compute-0 sudo[114943]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:31 compute-0 sudo[114968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:39:31 compute-0 sudo[114968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:39:31 compute-0 sudo[114968]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:32 compute-0 sudo[114993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:39:32 compute-0 sudo[114993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:39:32 compute-0 sudo[114993]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:32 compute-0 sudo[115018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:39:32 compute-0 sudo[115018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:39:32 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.6 deep-scrub starts
Oct 01 16:39:32 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.6 deep-scrub ok
Oct 01 16:39:32 compute-0 podman[115083]: 2025-10-01 16:39:32.468388571 +0000 UTC m=+0.055547485 container create a1a9094214ae2d1a8812b9b0122a5b2fdc00d179cbaa1a85283eb25cbc801032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_shannon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:39:32 compute-0 systemd[75900]: Created slice User Background Tasks Slice.
Oct 01 16:39:32 compute-0 systemd[75900]: Starting Cleanup of User's Temporary Files and Directories...
Oct 01 16:39:32 compute-0 systemd[1]: Started libpod-conmon-a1a9094214ae2d1a8812b9b0122a5b2fdc00d179cbaa1a85283eb25cbc801032.scope.
Oct 01 16:39:32 compute-0 systemd[75900]: Finished Cleanup of User's Temporary Files and Directories.
Oct 01 16:39:32 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:39:32 compute-0 podman[115083]: 2025-10-01 16:39:32.517526669 +0000 UTC m=+0.104685593 container init a1a9094214ae2d1a8812b9b0122a5b2fdc00d179cbaa1a85283eb25cbc801032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_shannon, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:39:32 compute-0 podman[115083]: 2025-10-01 16:39:32.523198622 +0000 UTC m=+0.110357536 container start a1a9094214ae2d1a8812b9b0122a5b2fdc00d179cbaa1a85283eb25cbc801032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_shannon, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Oct 01 16:39:32 compute-0 podman[115083]: 2025-10-01 16:39:32.525723393 +0000 UTC m=+0.112882307 container attach a1a9094214ae2d1a8812b9b0122a5b2fdc00d179cbaa1a85283eb25cbc801032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:39:32 compute-0 awesome_shannon[115101]: 167 167
Oct 01 16:39:32 compute-0 systemd[1]: libpod-a1a9094214ae2d1a8812b9b0122a5b2fdc00d179cbaa1a85283eb25cbc801032.scope: Deactivated successfully.
Oct 01 16:39:32 compute-0 podman[115083]: 2025-10-01 16:39:32.52774069 +0000 UTC m=+0.114899604 container died a1a9094214ae2d1a8812b9b0122a5b2fdc00d179cbaa1a85283eb25cbc801032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:39:32 compute-0 podman[115083]: 2025-10-01 16:39:32.440573319 +0000 UTC m=+0.027732333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:39:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9fc164559ced776c1cf76a44b9fe3d008381c72d05671af9373c1b7463a5e77-merged.mount: Deactivated successfully.
Oct 01 16:39:32 compute-0 podman[115083]: 2025-10-01 16:39:32.562796205 +0000 UTC m=+0.149955119 container remove a1a9094214ae2d1a8812b9b0122a5b2fdc00d179cbaa1a85283eb25cbc801032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 16:39:32 compute-0 systemd[1]: libpod-conmon-a1a9094214ae2d1a8812b9b0122a5b2fdc00d179cbaa1a85283eb25cbc801032.scope: Deactivated successfully.
Oct 01 16:39:32 compute-0 podman[115126]: 2025-10-01 16:39:32.724343258 +0000 UTC m=+0.042224148 container create 29b1fa979f4fccebb9b216ebdd9483d48ebdff2afb7d9b8e139a9a33200d4b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:39:32 compute-0 ceph-mon[74273]: pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:39:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:39:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:39:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:39:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:39:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:39:32 compute-0 ceph-mon[74273]: 3.6 deep-scrub starts
Oct 01 16:39:32 compute-0 ceph-mon[74273]: 3.6 deep-scrub ok
Oct 01 16:39:32 compute-0 systemd[1]: Started libpod-conmon-29b1fa979f4fccebb9b216ebdd9483d48ebdff2afb7d9b8e139a9a33200d4b32.scope.
Oct 01 16:39:32 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:39:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c1b504807b4a717081f20d73af1cd85db9a49880a93bbf752e65b31f238647/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:39:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c1b504807b4a717081f20d73af1cd85db9a49880a93bbf752e65b31f238647/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:39:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c1b504807b4a717081f20d73af1cd85db9a49880a93bbf752e65b31f238647/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:39:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c1b504807b4a717081f20d73af1cd85db9a49880a93bbf752e65b31f238647/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:39:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c1b504807b4a717081f20d73af1cd85db9a49880a93bbf752e65b31f238647/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:39:32 compute-0 podman[115126]: 2025-10-01 16:39:32.70857908 +0000 UTC m=+0.026459960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:39:32 compute-0 podman[115126]: 2025-10-01 16:39:32.813434754 +0000 UTC m=+0.131315634 container init 29b1fa979f4fccebb9b216ebdd9483d48ebdff2afb7d9b8e139a9a33200d4b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 16:39:32 compute-0 podman[115126]: 2025-10-01 16:39:32.822573118 +0000 UTC m=+0.140453988 container start 29b1fa979f4fccebb9b216ebdd9483d48ebdff2afb7d9b8e139a9a33200d4b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:39:32 compute-0 podman[115126]: 2025-10-01 16:39:32.825798269 +0000 UTC m=+0.143679139 container attach 29b1fa979f4fccebb9b216ebdd9483d48ebdff2afb7d9b8e139a9a33200d4b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 16:39:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:39:33 compute-0 ecstatic_yalow[115142]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:39:33 compute-0 ecstatic_yalow[115142]: --> relative data size: 1.0
Oct 01 16:39:33 compute-0 ecstatic_yalow[115142]: --> All data devices are unavailable
Oct 01 16:39:33 compute-0 systemd[1]: libpod-29b1fa979f4fccebb9b216ebdd9483d48ebdff2afb7d9b8e139a9a33200d4b32.scope: Deactivated successfully.
Oct 01 16:39:33 compute-0 podman[115126]: 2025-10-01 16:39:33.932064205 +0000 UTC m=+1.249945115 container died 29b1fa979f4fccebb9b216ebdd9483d48ebdff2afb7d9b8e139a9a33200d4b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 16:39:33 compute-0 systemd[1]: libpod-29b1fa979f4fccebb9b216ebdd9483d48ebdff2afb7d9b8e139a9a33200d4b32.scope: Consumed 1.056s CPU time.
Oct 01 16:39:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-96c1b504807b4a717081f20d73af1cd85db9a49880a93bbf752e65b31f238647-merged.mount: Deactivated successfully.
Oct 01 16:39:34 compute-0 podman[115126]: 2025-10-01 16:39:34.010697809 +0000 UTC m=+1.328578709 container remove 29b1fa979f4fccebb9b216ebdd9483d48ebdff2afb7d9b8e139a9a33200d4b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 16:39:34 compute-0 systemd[1]: libpod-conmon-29b1fa979f4fccebb9b216ebdd9483d48ebdff2afb7d9b8e139a9a33200d4b32.scope: Deactivated successfully.
Oct 01 16:39:34 compute-0 sudo[115018]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:34 compute-0 sudo[115186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:39:34 compute-0 sudo[115186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:39:34 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct 01 16:39:34 compute-0 sudo[115186]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:34 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct 01 16:39:34 compute-0 sudo[115211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:39:34 compute-0 sudo[115211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:39:34 compute-0 sudo[115211]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:34 compute-0 sudo[115236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:39:34 compute-0 sudo[115236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:39:34 compute-0 sudo[115236]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:34 compute-0 sudo[115261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:39:34 compute-0 sudo[115261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:39:34 compute-0 ceph-mon[74273]: pgmap v274: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:34 compute-0 ceph-mon[74273]: 11.2 scrub starts
Oct 01 16:39:34 compute-0 ceph-mon[74273]: 11.2 scrub ok
Oct 01 16:39:34 compute-0 podman[115325]: 2025-10-01 16:39:34.741042461 +0000 UTC m=+0.058733761 container create a6f4e77b21b40594725105e0ced70317a9df4418433899bab13956d2f8a9a03b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:39:34 compute-0 systemd[1]: Started libpod-conmon-a6f4e77b21b40594725105e0ced70317a9df4418433899bab13956d2f8a9a03b.scope.
Oct 01 16:39:34 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:39:34 compute-0 podman[115325]: 2025-10-01 16:39:34.712564644 +0000 UTC m=+0.030256044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:39:34 compute-0 podman[115325]: 2025-10-01 16:39:34.807975285 +0000 UTC m=+0.125666595 container init a6f4e77b21b40594725105e0ced70317a9df4418433899bab13956d2f8a9a03b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:39:34 compute-0 podman[115325]: 2025-10-01 16:39:34.815329552 +0000 UTC m=+0.133020842 container start a6f4e77b21b40594725105e0ced70317a9df4418433899bab13956d2f8a9a03b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:39:34 compute-0 podman[115325]: 2025-10-01 16:39:34.818054059 +0000 UTC m=+0.135745349 container attach a6f4e77b21b40594725105e0ced70317a9df4418433899bab13956d2f8a9a03b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 16:39:34 compute-0 focused_hofstadter[115341]: 167 167
Oct 01 16:39:34 compute-0 systemd[1]: libpod-a6f4e77b21b40594725105e0ced70317a9df4418433899bab13956d2f8a9a03b.scope: Deactivated successfully.
Oct 01 16:39:34 compute-0 podman[115325]: 2025-10-01 16:39:34.821196349 +0000 UTC m=+0.138887659 container died a6f4e77b21b40594725105e0ced70317a9df4418433899bab13956d2f8a9a03b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 16:39:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a20f8df6eb36129f5dff42c4df0d93bb412033d62ba025830b8af625c6458b77-merged.mount: Deactivated successfully.
Oct 01 16:39:34 compute-0 podman[115325]: 2025-10-01 16:39:34.855590939 +0000 UTC m=+0.173282229 container remove a6f4e77b21b40594725105e0ced70317a9df4418433899bab13956d2f8a9a03b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct 01 16:39:34 compute-0 systemd[1]: libpod-conmon-a6f4e77b21b40594725105e0ced70317a9df4418433899bab13956d2f8a9a03b.scope: Deactivated successfully.
Oct 01 16:39:35 compute-0 podman[115365]: 2025-10-01 16:39:35.031408919 +0000 UTC m=+0.053820104 container create c9d285e3dbc2953ce2b9ce574857779128ddbb2b190f11dc253813aa5e430433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:39:35 compute-0 systemd[1]: Started libpod-conmon-c9d285e3dbc2953ce2b9ce574857779128ddbb2b190f11dc253813aa5e430433.scope.
Oct 01 16:39:35 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:39:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79416b98f1d5bb636543f5fc97d39a1336ac3893d96e4c58039502c314b136dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:39:35 compute-0 podman[115365]: 2025-10-01 16:39:35.001214855 +0000 UTC m=+0.023626100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:39:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79416b98f1d5bb636543f5fc97d39a1336ac3893d96e4c58039502c314b136dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:39:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79416b98f1d5bb636543f5fc97d39a1336ac3893d96e4c58039502c314b136dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:39:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79416b98f1d5bb636543f5fc97d39a1336ac3893d96e4c58039502c314b136dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:39:35 compute-0 podman[115365]: 2025-10-01 16:39:35.112800465 +0000 UTC m=+0.135211670 container init c9d285e3dbc2953ce2b9ce574857779128ddbb2b190f11dc253813aa5e430433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 16:39:35 compute-0 podman[115365]: 2025-10-01 16:39:35.124984716 +0000 UTC m=+0.147395901 container start c9d285e3dbc2953ce2b9ce574857779128ddbb2b190f11dc253813aa5e430433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 16:39:35 compute-0 podman[115365]: 2025-10-01 16:39:35.128729223 +0000 UTC m=+0.151140468 container attach c9d285e3dbc2953ce2b9ce574857779128ddbb2b190f11dc253813aa5e430433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bassi, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:39:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]: {
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:     "0": [
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:         {
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "devices": [
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "/dev/loop3"
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             ],
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "lv_name": "ceph_lv0",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "lv_size": "21470642176",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "name": "ceph_lv0",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "tags": {
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.cluster_name": "ceph",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.crush_device_class": "",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.encrypted": "0",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.osd_id": "0",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.type": "block",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.vdo": "0"
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             },
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "type": "block",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "vg_name": "ceph_vg0"
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:         }
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:     ],
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:     "1": [
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:         {
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "devices": [
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "/dev/loop4"
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             ],
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "lv_name": "ceph_lv1",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "lv_size": "21470642176",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "name": "ceph_lv1",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "tags": {
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.cluster_name": "ceph",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.crush_device_class": "",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.encrypted": "0",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.osd_id": "1",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.type": "block",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.vdo": "0"
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             },
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "type": "block",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "vg_name": "ceph_vg1"
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:         }
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:     ],
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:     "2": [
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:         {
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "devices": [
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "/dev/loop5"
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             ],
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "lv_name": "ceph_lv2",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "lv_size": "21470642176",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "name": "ceph_lv2",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "tags": {
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.cluster_name": "ceph",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.crush_device_class": "",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.encrypted": "0",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.osd_id": "2",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.type": "block",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:                 "ceph.vdo": "0"
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             },
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "type": "block",
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:             "vg_name": "ceph_vg2"
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:         }
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]:     ]
Oct 01 16:39:35 compute-0 dazzling_bassi[115381]: }
Oct 01 16:39:35 compute-0 systemd[1]: libpod-c9d285e3dbc2953ce2b9ce574857779128ddbb2b190f11dc253813aa5e430433.scope: Deactivated successfully.
Oct 01 16:39:35 compute-0 podman[115365]: 2025-10-01 16:39:35.886546284 +0000 UTC m=+0.908957429 container died c9d285e3dbc2953ce2b9ce574857779128ddbb2b190f11dc253813aa5e430433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:39:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-79416b98f1d5bb636543f5fc97d39a1336ac3893d96e4c58039502c314b136dc-merged.mount: Deactivated successfully.
Oct 01 16:39:35 compute-0 podman[115365]: 2025-10-01 16:39:35.941784989 +0000 UTC m=+0.964196134 container remove c9d285e3dbc2953ce2b9ce574857779128ddbb2b190f11dc253813aa5e430433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:39:35 compute-0 systemd[1]: libpod-conmon-c9d285e3dbc2953ce2b9ce574857779128ddbb2b190f11dc253813aa5e430433.scope: Deactivated successfully.
Oct 01 16:39:35 compute-0 sudo[115261]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:36 compute-0 sudo[115401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:39:36 compute-0 sudo[115401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:39:36 compute-0 sudo[115401]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:36 compute-0 sudo[115426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:39:36 compute-0 sudo[115426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:39:36 compute-0 sudo[115426]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:36 compute-0 sudo[115451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:39:36 compute-0 sudo[115451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:39:36 compute-0 sudo[115451]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:36 compute-0 sudo[115476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:39:36 compute-0 sudo[115476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:39:36 compute-0 podman[115542]: 2025-10-01 16:39:36.653026017 +0000 UTC m=+0.048715435 container create 771a5c1fa584a5cb2369f1049c10fb832d81feffe3fe864b8c6597d64ca88f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:39:36 compute-0 systemd[1]: Started libpod-conmon-771a5c1fa584a5cb2369f1049c10fb832d81feffe3fe864b8c6597d64ca88f77.scope.
Oct 01 16:39:36 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:39:36 compute-0 podman[115542]: 2025-10-01 16:39:36.633787237 +0000 UTC m=+0.029476645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:39:36 compute-0 podman[115542]: 2025-10-01 16:39:36.728565798 +0000 UTC m=+0.124255196 container init 771a5c1fa584a5cb2369f1049c10fb832d81feffe3fe864b8c6597d64ca88f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 01 16:39:36 compute-0 podman[115542]: 2025-10-01 16:39:36.74028635 +0000 UTC m=+0.135975748 container start 771a5c1fa584a5cb2369f1049c10fb832d81feffe3fe864b8c6597d64ca88f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ritchie, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:39:36 compute-0 podman[115542]: 2025-10-01 16:39:36.742944519 +0000 UTC m=+0.138633917 container attach 771a5c1fa584a5cb2369f1049c10fb832d81feffe3fe864b8c6597d64ca88f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ritchie, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 16:39:36 compute-0 stoic_ritchie[115559]: 167 167
Oct 01 16:39:36 compute-0 systemd[1]: libpod-771a5c1fa584a5cb2369f1049c10fb832d81feffe3fe864b8c6597d64ca88f77.scope: Deactivated successfully.
Oct 01 16:39:36 compute-0 podman[115542]: 2025-10-01 16:39:36.749363906 +0000 UTC m=+0.145053314 container died 771a5c1fa584a5cb2369f1049c10fb832d81feffe3fe864b8c6597d64ca88f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ritchie, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:39:36 compute-0 ceph-mon[74273]: pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a18943ab9934209dfb3f221e806b3187d361398e1a7fd309e2b020284507480-merged.mount: Deactivated successfully.
Oct 01 16:39:36 compute-0 podman[115542]: 2025-10-01 16:39:36.805177864 +0000 UTC m=+0.200867262 container remove 771a5c1fa584a5cb2369f1049c10fb832d81feffe3fe864b8c6597d64ca88f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ritchie, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:39:36 compute-0 systemd[1]: libpod-conmon-771a5c1fa584a5cb2369f1049c10fb832d81feffe3fe864b8c6597d64ca88f77.scope: Deactivated successfully.
Oct 01 16:39:36 compute-0 podman[115584]: 2025-10-01 16:39:36.979766119 +0000 UTC m=+0.039317688 container create c5877f5b109385bb72c719511cd9e93f020bdd59b869cb737bf34825150460e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_colden, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:39:37 compute-0 systemd[1]: Started libpod-conmon-c5877f5b109385bb72c719511cd9e93f020bdd59b869cb737bf34825150460e2.scope.
Oct 01 16:39:37 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c8962dbd245f11671f16e88aa4c1442c6a97e5bfb66a9b3ab0c5239a7f1ff1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c8962dbd245f11671f16e88aa4c1442c6a97e5bfb66a9b3ab0c5239a7f1ff1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c8962dbd245f11671f16e88aa4c1442c6a97e5bfb66a9b3ab0c5239a7f1ff1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c8962dbd245f11671f16e88aa4c1442c6a97e5bfb66a9b3ab0c5239a7f1ff1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:39:37 compute-0 podman[115584]: 2025-10-01 16:39:37.046367821 +0000 UTC m=+0.105919400 container init c5877f5b109385bb72c719511cd9e93f020bdd59b869cb737bf34825150460e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_colden, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:39:37 compute-0 podman[115584]: 2025-10-01 16:39:37.051755357 +0000 UTC m=+0.111306916 container start c5877f5b109385bb72c719511cd9e93f020bdd59b869cb737bf34825150460e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:39:37 compute-0 podman[115584]: 2025-10-01 16:39:37.055066048 +0000 UTC m=+0.114617607 container attach c5877f5b109385bb72c719511cd9e93f020bdd59b869cb737bf34825150460e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 16:39:37 compute-0 podman[115584]: 2025-10-01 16:39:36.962768214 +0000 UTC m=+0.022319783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:39:37 compute-0 sshd-session[115606]: Accepted publickey for zuul from 192.168.122.30 port 47586 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:39:37 compute-0 systemd-logind[788]: New session 37 of user zuul.
Oct 01 16:39:37 compute-0 systemd[1]: Started Session 37 of User zuul.
Oct 01 16:39:37 compute-0 sshd-session[115606]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:39:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:37 compute-0 goofy_colden[115601]: {
Oct 01 16:39:37 compute-0 goofy_colden[115601]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:39:37 compute-0 goofy_colden[115601]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:39:37 compute-0 goofy_colden[115601]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:39:37 compute-0 goofy_colden[115601]:         "osd_id": 2,
Oct 01 16:39:37 compute-0 goofy_colden[115601]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:39:37 compute-0 goofy_colden[115601]:         "type": "bluestore"
Oct 01 16:39:37 compute-0 goofy_colden[115601]:     },
Oct 01 16:39:37 compute-0 goofy_colden[115601]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:39:37 compute-0 goofy_colden[115601]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:39:37 compute-0 goofy_colden[115601]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:39:37 compute-0 goofy_colden[115601]:         "osd_id": 0,
Oct 01 16:39:37 compute-0 goofy_colden[115601]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:39:37 compute-0 goofy_colden[115601]:         "type": "bluestore"
Oct 01 16:39:37 compute-0 goofy_colden[115601]:     },
Oct 01 16:39:37 compute-0 goofy_colden[115601]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:39:37 compute-0 goofy_colden[115601]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:39:37 compute-0 goofy_colden[115601]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:39:37 compute-0 goofy_colden[115601]:         "osd_id": 1,
Oct 01 16:39:37 compute-0 goofy_colden[115601]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:39:37 compute-0 goofy_colden[115601]:         "type": "bluestore"
Oct 01 16:39:37 compute-0 goofy_colden[115601]:     }
Oct 01 16:39:37 compute-0 goofy_colden[115601]: }
Oct 01 16:39:37 compute-0 podman[115584]: 2025-10-01 16:39:37.994598441 +0000 UTC m=+1.054150000 container died c5877f5b109385bb72c719511cd9e93f020bdd59b869cb737bf34825150460e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 16:39:37 compute-0 systemd[1]: libpod-c5877f5b109385bb72c719511cd9e93f020bdd59b869cb737bf34825150460e2.scope: Deactivated successfully.
Oct 01 16:39:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4c8962dbd245f11671f16e88aa4c1442c6a97e5bfb66a9b3ab0c5239a7f1ff1-merged.mount: Deactivated successfully.
Oct 01 16:39:38 compute-0 podman[115584]: 2025-10-01 16:39:38.052140439 +0000 UTC m=+1.111692008 container remove c5877f5b109385bb72c719511cd9e93f020bdd59b869cb737bf34825150460e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 16:39:38 compute-0 systemd[1]: libpod-conmon-c5877f5b109385bb72c719511cd9e93f020bdd59b869cb737bf34825150460e2.scope: Deactivated successfully.
Oct 01 16:39:38 compute-0 sudo[115476]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:39:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:39:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:39:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:39:38 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 3fa29de0-930b-4c90-9df4-c7869699e871 does not exist
Oct 01 16:39:38 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev d904d426-3dcf-4fcf-9a92-f6966a716c1e does not exist
Oct 01 16:39:38 compute-0 python3.9[115779]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:39:38 compute-0 sudo[115799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:39:38 compute-0 sudo[115799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:39:38 compute-0 sudo[115799]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:38 compute-0 sudo[115828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:39:38 compute-0 sudo[115828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:39:38 compute-0 sudo[115828]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:39:38 compute-0 ceph-mon[74273]: pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:38 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:39:38 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:39:39 compute-0 python3.9[116002]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 16:39:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:40 compute-0 python3.9[116195]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:39:40 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.a scrub starts
Oct 01 16:39:40 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.a scrub ok
Oct 01 16:39:40 compute-0 ceph-mon[74273]: pgmap v277: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:40 compute-0 ceph-mon[74273]: 2.a scrub starts
Oct 01 16:39:40 compute-0 ceph-mon[74273]: 2.a scrub ok
Oct 01 16:39:40 compute-0 sshd-session[115609]: Connection closed by 192.168.122.30 port 47586
Oct 01 16:39:40 compute-0 sshd-session[115606]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:39:40 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Oct 01 16:39:40 compute-0 systemd[1]: session-37.scope: Consumed 2.416s CPU time.
Oct 01 16:39:40 compute-0 systemd-logind[788]: Session 37 logged out. Waiting for processes to exit.
Oct 01 16:39:40 compute-0 systemd-logind[788]: Removed session 37.
Oct 01 16:39:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:39:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:39:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:39:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:39:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:39:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:39:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:42 compute-0 ceph-mon[74273]: pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:43 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct 01 16:39:43 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct 01 16:39:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:39:43 compute-0 ceph-mon[74273]: 5.c scrub starts
Oct 01 16:39:44 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.7 deep-scrub starts
Oct 01 16:39:44 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.7 deep-scrub ok
Oct 01 16:39:44 compute-0 ceph-mon[74273]: pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:44 compute-0 ceph-mon[74273]: 5.c scrub ok
Oct 01 16:39:44 compute-0 ceph-mon[74273]: 2.7 deep-scrub starts
Oct 01 16:39:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:45 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.f scrub starts
Oct 01 16:39:45 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.f scrub ok
Oct 01 16:39:45 compute-0 ceph-mon[74273]: 2.7 deep-scrub ok
Oct 01 16:39:46 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct 01 16:39:46 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct 01 16:39:46 compute-0 sshd-session[116221]: Accepted publickey for zuul from 192.168.122.30 port 33070 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:39:46 compute-0 systemd-logind[788]: New session 38 of user zuul.
Oct 01 16:39:46 compute-0 systemd[1]: Started Session 38 of User zuul.
Oct 01 16:39:46 compute-0 sshd-session[116221]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:39:46 compute-0 ceph-mon[74273]: pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:46 compute-0 ceph-mon[74273]: 10.f scrub starts
Oct 01 16:39:46 compute-0 ceph-mon[74273]: 10.f scrub ok
Oct 01 16:39:46 compute-0 ceph-mon[74273]: 7.e scrub starts
Oct 01 16:39:46 compute-0 ceph-mon[74273]: 7.e scrub ok
Oct 01 16:39:47 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.2 deep-scrub starts
Oct 01 16:39:47 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.2 deep-scrub ok
Oct 01 16:39:47 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct 01 16:39:47 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct 01 16:39:47 compute-0 python3.9[116374]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:39:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:47 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Oct 01 16:39:47 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Oct 01 16:39:47 compute-0 ceph-mon[74273]: 8.2 deep-scrub starts
Oct 01 16:39:47 compute-0 ceph-mon[74273]: 8.2 deep-scrub ok
Oct 01 16:39:47 compute-0 ceph-mon[74273]: 10.2 scrub starts
Oct 01 16:39:47 compute-0 ceph-mon[74273]: 10.2 scrub ok
Oct 01 16:39:48 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct 01 16:39:48 compute-0 python3.9[116528]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:39:48 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct 01 16:39:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:39:48 compute-0 ceph-mon[74273]: 3.a scrub starts
Oct 01 16:39:48 compute-0 ceph-mon[74273]: 3.a scrub ok
Oct 01 16:39:48 compute-0 ceph-mon[74273]: pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:49 compute-0 sudo[116682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhkxmmawvfcvzxjeclyswcczhhkaffec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336788.7837703-40-3689327538248/AnsiballZ_setup.py'
Oct 01 16:39:49 compute-0 sudo[116682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:49 compute-0 python3.9[116684]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 16:39:49 compute-0 sudo[116682]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:49 compute-0 ceph-mon[74273]: 7.4 scrub starts
Oct 01 16:39:49 compute-0 ceph-mon[74273]: 7.4 scrub ok
Oct 01 16:39:50 compute-0 sudo[116766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plrdeazsmjjlxenygwiltoescwihnfux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336788.7837703-40-3689327538248/AnsiballZ_dnf.py'
Oct 01 16:39:50 compute-0 sudo[116766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:50 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.f scrub starts
Oct 01 16:39:50 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.f scrub ok
Oct 01 16:39:50 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Oct 01 16:39:50 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Oct 01 16:39:50 compute-0 python3.9[116768]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:39:50 compute-0 ceph-mon[74273]: pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:50 compute-0 ceph-mon[74273]: 2.6 scrub starts
Oct 01 16:39:50 compute-0 ceph-mon[74273]: 2.6 scrub ok
Oct 01 16:39:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:51 compute-0 sudo[116766]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:51 compute-0 ceph-mon[74273]: 8.f scrub starts
Oct 01 16:39:51 compute-0 ceph-mon[74273]: 8.f scrub ok
Oct 01 16:39:52 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct 01 16:39:52 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct 01 16:39:52 compute-0 sudo[116919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrkosfwgxwzgmoykymlrfeufjfwmdusn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336791.9260266-52-198435573520879/AnsiballZ_setup.py'
Oct 01 16:39:52 compute-0 sudo[116919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:52 compute-0 python3.9[116921]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 16:39:52 compute-0 sudo[116919]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:52 compute-0 ceph-mon[74273]: pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:52 compute-0 ceph-mon[74273]: 11.8 scrub starts
Oct 01 16:39:52 compute-0 ceph-mon[74273]: 11.8 scrub ok
Oct 01 16:39:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:53 compute-0 sudo[117114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yazndifvprhrfiszoclysrhjlorqoilq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336793.138569-63-43934488296859/AnsiballZ_file.py'
Oct 01 16:39:53 compute-0 sudo[117114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:39:53 compute-0 python3.9[117116]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:39:53 compute-0 sudo[117114]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:54 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct 01 16:39:54 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct 01 16:39:54 compute-0 sudo[117266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtnnzzomjjxcexwpahnuvegblsjuycdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336793.9406278-71-89194464493271/AnsiballZ_command.py'
Oct 01 16:39:54 compute-0 sudo[117266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:54 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Oct 01 16:39:54 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Oct 01 16:39:54 compute-0 python3.9[117268]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:39:54 compute-0 sudo[117266]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:54 compute-0 ceph-mon[74273]: pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:54 compute-0 ceph-mon[74273]: 10.11 scrub starts
Oct 01 16:39:54 compute-0 ceph-mon[74273]: 10.11 scrub ok
Oct 01 16:39:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:55 compute-0 sudo[117431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjthrxhsojrdhbaoixjmbfgfcsdciewd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336794.9351947-79-145903357622758/AnsiballZ_stat.py'
Oct 01 16:39:55 compute-0 sudo[117431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:55 compute-0 python3.9[117433]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:39:55 compute-0 sudo[117431]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:55 compute-0 ceph-mon[74273]: 7.f scrub starts
Oct 01 16:39:55 compute-0 ceph-mon[74273]: 7.f scrub ok
Oct 01 16:39:56 compute-0 sudo[117509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uofhqbxjzkusvhxgtuttitkozfhyhyuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336794.9351947-79-145903357622758/AnsiballZ_file.py'
Oct 01 16:39:56 compute-0 sudo[117509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:56 compute-0 python3.9[117511]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:39:56 compute-0 sudo[117509]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:56 compute-0 sudo[117661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlwdmlwrincuxlnhnfvxocfzocvwcviu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336796.4450848-91-206730664338777/AnsiballZ_stat.py'
Oct 01 16:39:56 compute-0 sudo[117661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:56 compute-0 ceph-mon[74273]: pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:57 compute-0 python3.9[117663]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:39:57 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.e scrub starts
Oct 01 16:39:57 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.e scrub ok
Oct 01 16:39:57 compute-0 sudo[117661]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:57 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.b deep-scrub starts
Oct 01 16:39:57 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.b deep-scrub ok
Oct 01 16:39:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:57 compute-0 sudo[117739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yodccwfgiqgxuhzpsmknoqonteyjhdvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336796.4450848-91-206730664338777/AnsiballZ_file.py'
Oct 01 16:39:57 compute-0 sudo[117739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:57 compute-0 python3.9[117741]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:39:57 compute-0 sudo[117739]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:57 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct 01 16:39:57 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct 01 16:39:57 compute-0 ceph-mon[74273]: 3.e scrub starts
Oct 01 16:39:57 compute-0 ceph-mon[74273]: 3.e scrub ok
Oct 01 16:39:58 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Oct 01 16:39:58 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Oct 01 16:39:58 compute-0 sudo[117892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wglamvzjjhfxvgkayncyadhzwlwvccur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336797.8289702-104-108709468440868/AnsiballZ_ini_file.py'
Oct 01 16:39:58 compute-0 sudo[117892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:58 compute-0 python3.9[117894]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:39:58 compute-0 sudo[117892]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:39:58 compute-0 ceph-mon[74273]: 8.b deep-scrub starts
Oct 01 16:39:58 compute-0 ceph-mon[74273]: 8.b deep-scrub ok
Oct 01 16:39:58 compute-0 ceph-mon[74273]: pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:58 compute-0 ceph-mon[74273]: 5.1 scrub starts
Oct 01 16:39:58 compute-0 ceph-mon[74273]: 5.1 scrub ok
Oct 01 16:39:59 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Oct 01 16:39:59 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Oct 01 16:39:59 compute-0 sudo[118044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmkbzvpgggxcgihsavqsogbqxwugpkcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336798.784443-104-160828216396057/AnsiballZ_ini_file.py'
Oct 01 16:39:59 compute-0 sudo[118044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:39:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:39:59 compute-0 python3.9[118046]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:39:59 compute-0 sudo[118044]: pam_unix(sudo:session): session closed for user root
Oct 01 16:39:59 compute-0 ceph-mon[74273]: 3.9 scrub starts
Oct 01 16:39:59 compute-0 ceph-mon[74273]: 3.9 scrub ok
Oct 01 16:39:59 compute-0 ceph-mon[74273]: 8.4 scrub starts
Oct 01 16:39:59 compute-0 ceph-mon[74273]: 8.4 scrub ok
Oct 01 16:39:59 compute-0 ceph-mon[74273]: pgmap v287: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:00 compute-0 sudo[118196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odfenqgsjzkixbcwhyiigrqawywbsucb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336799.582711-104-226027695991299/AnsiballZ_ini_file.py'
Oct 01 16:40:00 compute-0 sudo[118196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:00 compute-0 python3.9[118198]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:40:00 compute-0 sudo[118196]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:00 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct 01 16:40:00 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct 01 16:40:00 compute-0 sudo[118348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkuqkysepiluopwvjljzwvqoefndqsvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336800.3718536-104-2278356231710/AnsiballZ_ini_file.py'
Oct 01 16:40:00 compute-0 sudo[118348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:00 compute-0 python3.9[118350]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:40:00 compute-0 sudo[118348]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:01 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Oct 01 16:40:01 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Oct 01 16:40:01 compute-0 ceph-mon[74273]: 11.1 scrub starts
Oct 01 16:40:01 compute-0 ceph-mon[74273]: 11.1 scrub ok
Oct 01 16:40:01 compute-0 ceph-mon[74273]: 11.4 scrub starts
Oct 01 16:40:01 compute-0 ceph-mon[74273]: 11.4 scrub ok
Oct 01 16:40:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:01 compute-0 sudo[118500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxdctzptqxefbryzabbhwuuqmzgllmex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336801.2341907-135-72303034800124/AnsiballZ_dnf.py'
Oct 01 16:40:01 compute-0 sudo[118500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:01 compute-0 python3.9[118502]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:40:02 compute-0 ceph-mon[74273]: pgmap v288: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:02 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.a scrub starts
Oct 01 16:40:02 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.a scrub ok
Oct 01 16:40:03 compute-0 sudo[118500]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:03 compute-0 ceph-mon[74273]: 7.a scrub starts
Oct 01 16:40:03 compute-0 ceph-mon[74273]: 7.a scrub ok
Oct 01 16:40:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:40:04 compute-0 sudo[118653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqgekkoudnneexjmyqlifufdiowinsxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336803.7229433-146-227818126869163/AnsiballZ_setup.py'
Oct 01 16:40:04 compute-0 sudo[118653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:04 compute-0 ceph-mon[74273]: pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:04 compute-0 python3.9[118655]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:40:04 compute-0 sudo[118653]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:04 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Oct 01 16:40:04 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Oct 01 16:40:04 compute-0 sudo[118807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwfshovnyjaupadvlmcgrfvvomanqkjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336804.5857084-154-263648334339977/AnsiballZ_stat.py'
Oct 01 16:40:04 compute-0 sudo[118807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:05 compute-0 python3.9[118809]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:40:05 compute-0 sudo[118807]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:05 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.c scrub starts
Oct 01 16:40:05 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.c scrub ok
Oct 01 16:40:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:05 compute-0 sudo[118959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krdrwwzvmxqbdztmjopepsgnkruxopyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336805.3619025-163-195127353950189/AnsiballZ_stat.py'
Oct 01 16:40:05 compute-0 sudo[118959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:05 compute-0 python3.9[118961]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:40:05 compute-0 sudo[118959]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:05 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct 01 16:40:06 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct 01 16:40:06 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct 01 16:40:06 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct 01 16:40:06 compute-0 ceph-mon[74273]: 10.10 scrub starts
Oct 01 16:40:06 compute-0 ceph-mon[74273]: 10.10 scrub ok
Oct 01 16:40:06 compute-0 ceph-mon[74273]: 3.c scrub starts
Oct 01 16:40:06 compute-0 ceph-mon[74273]: 3.c scrub ok
Oct 01 16:40:06 compute-0 ceph-mon[74273]: pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:06 compute-0 ceph-mon[74273]: 7.8 scrub starts
Oct 01 16:40:06 compute-0 ceph-mon[74273]: 7.8 scrub ok
Oct 01 16:40:06 compute-0 sudo[119111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmepsisfmizvxaqecznncwnaykptknnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336806.2133794-173-263155469668234/AnsiballZ_service_facts.py'
Oct 01 16:40:06 compute-0 sudo[119111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:06 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Oct 01 16:40:06 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Oct 01 16:40:06 compute-0 python3.9[119113]: ansible-service_facts Invoked
Oct 01 16:40:07 compute-0 network[119130]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 01 16:40:07 compute-0 network[119131]: 'network-scripts' will be removed from distribution in near future.
Oct 01 16:40:07 compute-0 network[119132]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 01 16:40:07 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct 01 16:40:07 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct 01 16:40:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:07 compute-0 ceph-mon[74273]: 7.9 scrub starts
Oct 01 16:40:07 compute-0 ceph-mon[74273]: 7.9 scrub ok
Oct 01 16:40:08 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Oct 01 16:40:08 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Oct 01 16:40:08 compute-0 ceph-mon[74273]: 10.13 scrub starts
Oct 01 16:40:08 compute-0 ceph-mon[74273]: 10.13 scrub ok
Oct 01 16:40:08 compute-0 ceph-mon[74273]: 11.6 scrub starts
Oct 01 16:40:08 compute-0 ceph-mon[74273]: 11.6 scrub ok
Oct 01 16:40:08 compute-0 ceph-mon[74273]: pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:40:08 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Oct 01 16:40:08 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Oct 01 16:40:08 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Oct 01 16:40:08 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Oct 01 16:40:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:09 compute-0 ceph-mon[74273]: 8.6 scrub starts
Oct 01 16:40:09 compute-0 ceph-mon[74273]: 8.6 scrub ok
Oct 01 16:40:09 compute-0 ceph-mon[74273]: 11.18 scrub starts
Oct 01 16:40:09 compute-0 ceph-mon[74273]: 11.18 scrub ok
Oct 01 16:40:09 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Oct 01 16:40:09 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Oct 01 16:40:10 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct 01 16:40:10 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct 01 16:40:10 compute-0 ceph-mon[74273]: 2.1b scrub starts
Oct 01 16:40:10 compute-0 ceph-mon[74273]: 2.1b scrub ok
Oct 01 16:40:10 compute-0 ceph-mon[74273]: pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:10 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Oct 01 16:40:10 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Oct 01 16:40:11 compute-0 sudo[119111]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:40:11
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'backups', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control']
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:40:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:40:11 compute-0 ceph-mon[74273]: 10.12 scrub starts
Oct 01 16:40:11 compute-0 ceph-mon[74273]: 10.12 scrub ok
Oct 01 16:40:11 compute-0 ceph-mon[74273]: 3.f scrub starts
Oct 01 16:40:11 compute-0 ceph-mon[74273]: 3.f scrub ok
Oct 01 16:40:11 compute-0 sudo[119418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxoxqewxxlhsfniydabodiprumhciirz ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1759336811.5746121-186-78787560699860/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1759336811.5746121-186-78787560699860/args'
Oct 01 16:40:11 compute-0 sudo[119418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:11 compute-0 sudo[119418]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:12 compute-0 ceph-mon[74273]: 5.1d scrub starts
Oct 01 16:40:12 compute-0 ceph-mon[74273]: 5.1d scrub ok
Oct 01 16:40:12 compute-0 ceph-mon[74273]: pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:12 compute-0 sudo[119585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aplcahgipdtwjscomdytzfikyscjkdex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336812.3601646-197-205504401669106/AnsiballZ_dnf.py'
Oct 01 16:40:12 compute-0 sudo[119585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:12 compute-0 python3.9[119587]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:40:12 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Oct 01 16:40:12 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Oct 01 16:40:13 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Oct 01 16:40:13 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Oct 01 16:40:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:13 compute-0 ceph-mon[74273]: 11.3 scrub starts
Oct 01 16:40:13 compute-0 ceph-mon[74273]: 11.3 scrub ok
Oct 01 16:40:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:40:14 compute-0 sudo[119585]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:14 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Oct 01 16:40:14 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Oct 01 16:40:14 compute-0 ceph-mon[74273]: 8.1a scrub starts
Oct 01 16:40:14 compute-0 ceph-mon[74273]: 8.1a scrub ok
Oct 01 16:40:14 compute-0 ceph-mon[74273]: pgmap v294: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:15 compute-0 sudo[119738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-citlepertlijcnbjzyobujdvlkhgzxth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336814.400177-210-201426709469103/AnsiballZ_package_facts.py'
Oct 01 16:40:15 compute-0 sudo[119738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:15 compute-0 python3.9[119740]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 01 16:40:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:15 compute-0 ceph-mon[74273]: 3.12 scrub starts
Oct 01 16:40:15 compute-0 ceph-mon[74273]: 3.12 scrub ok
Oct 01 16:40:15 compute-0 sudo[119738]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:15 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct 01 16:40:15 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct 01 16:40:16 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Oct 01 16:40:16 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Oct 01 16:40:16 compute-0 sudo[119890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbpofzksuwmkgbqpkkwnspjemgqapzzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336815.9816577-220-181842306872537/AnsiballZ_stat.py'
Oct 01 16:40:16 compute-0 sudo[119890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:16 compute-0 ceph-mon[74273]: pgmap v295: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:16 compute-0 ceph-mon[74273]: 7.15 scrub starts
Oct 01 16:40:16 compute-0 ceph-mon[74273]: 7.15 scrub ok
Oct 01 16:40:16 compute-0 python3.9[119892]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:40:16 compute-0 sudo[119890]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:16 compute-0 sudo[119968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pshdtojdofoaosozlsnloafcrxfyshga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336815.9816577-220-181842306872537/AnsiballZ_file.py'
Oct 01 16:40:16 compute-0 sudo[119968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:16 compute-0 python3.9[119970]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:40:17 compute-0 sudo[119968]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:17 compute-0 ceph-mon[74273]: 11.19 scrub starts
Oct 01 16:40:17 compute-0 ceph-mon[74273]: 11.19 scrub ok
Oct 01 16:40:17 compute-0 sudo[120120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-potxloptexpdxrplrwtrmcnucnhtekxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336817.206536-232-114361919940652/AnsiballZ_stat.py'
Oct 01 16:40:17 compute-0 sudo[120120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:17 compute-0 python3.9[120122]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:40:17 compute-0 sudo[120120]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:17 compute-0 sudo[120198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxmjsrweqohbusxkapzcnrdlodhactms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336817.206536-232-114361919940652/AnsiballZ_file.py'
Oct 01 16:40:17 compute-0 sudo[120198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:18 compute-0 python3.9[120200]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:40:18 compute-0 sudo[120198]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:18 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Oct 01 16:40:18 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Oct 01 16:40:18 compute-0 ceph-mon[74273]: pgmap v296: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:40:19 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct 01 16:40:19 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct 01 16:40:19 compute-0 sudo[120350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihmoufivgzuprixhrdgvvzlezsmtcadr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336818.676975-250-196686149285852/AnsiballZ_lineinfile.py'
Oct 01 16:40:19 compute-0 sudo[120350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:19 compute-0 python3.9[120352]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:40:19 compute-0 sudo[120350]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:19 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Oct 01 16:40:19 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Oct 01 16:40:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:19 compute-0 ceph-mon[74273]: 8.1f scrub starts
Oct 01 16:40:19 compute-0 ceph-mon[74273]: 8.1f scrub ok
Oct 01 16:40:20 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Oct 01 16:40:20 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Oct 01 16:40:20 compute-0 sudo[120502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgqpodkdefykhzotbvhrwdejkgxsixdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336819.8110888-265-192128529267916/AnsiballZ_setup.py'
Oct 01 16:40:20 compute-0 sudo[120502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:20 compute-0 python3.9[120504]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 16:40:20 compute-0 sudo[120502]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:20 compute-0 ceph-mon[74273]: 2.9 scrub starts
Oct 01 16:40:20 compute-0 ceph-mon[74273]: 2.9 scrub ok
Oct 01 16:40:20 compute-0 ceph-mon[74273]: 3.15 scrub starts
Oct 01 16:40:20 compute-0 ceph-mon[74273]: 3.15 scrub ok
Oct 01 16:40:20 compute-0 ceph-mon[74273]: pgmap v297: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:40:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:40:21 compute-0 sudo[120586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcfysbsauytgfajflogrdjbzlaiefwwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336819.8110888-265-192128529267916/AnsiballZ_systemd.py'
Oct 01 16:40:21 compute-0 sudo[120586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:21 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct 01 16:40:21 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct 01 16:40:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:21 compute-0 python3.9[120588]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:40:21 compute-0 sudo[120586]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:21 compute-0 ceph-mon[74273]: 5.1a scrub starts
Oct 01 16:40:21 compute-0 ceph-mon[74273]: 5.1a scrub ok
Oct 01 16:40:22 compute-0 sshd-session[116224]: Connection closed by 192.168.122.30 port 33070
Oct 01 16:40:22 compute-0 sshd-session[116221]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:40:22 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Oct 01 16:40:22 compute-0 systemd[1]: session-38.scope: Consumed 24.722s CPU time.
Oct 01 16:40:22 compute-0 systemd-logind[788]: Session 38 logged out. Waiting for processes to exit.
Oct 01 16:40:22 compute-0 systemd-logind[788]: Removed session 38.
Oct 01 16:40:22 compute-0 ceph-mon[74273]: 8.18 scrub starts
Oct 01 16:40:22 compute-0 ceph-mon[74273]: 8.18 scrub ok
Oct 01 16:40:22 compute-0 ceph-mon[74273]: pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:40:24 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Oct 01 16:40:24 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Oct 01 16:40:24 compute-0 ceph-mon[74273]: pgmap v299: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:24 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Oct 01 16:40:24 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Oct 01 16:40:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:25 compute-0 ceph-mon[74273]: 8.1d scrub starts
Oct 01 16:40:25 compute-0 ceph-mon[74273]: 8.1d scrub ok
Oct 01 16:40:25 compute-0 ceph-mon[74273]: 11.1a scrub starts
Oct 01 16:40:25 compute-0 ceph-mon[74273]: 11.1a scrub ok
Oct 01 16:40:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.1b deep-scrub starts
Oct 01 16:40:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.1b deep-scrub ok
Oct 01 16:40:26 compute-0 ceph-mon[74273]: pgmap v300: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:26 compute-0 ceph-mon[74273]: 11.1b deep-scrub starts
Oct 01 16:40:26 compute-0 ceph-mon[74273]: 11.1b deep-scrub ok
Oct 01 16:40:26 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct 01 16:40:26 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct 01 16:40:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:27 compute-0 sshd-session[120615]: Accepted publickey for zuul from 192.168.122.30 port 46558 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:40:27 compute-0 systemd-logind[788]: New session 39 of user zuul.
Oct 01 16:40:27 compute-0 systemd[1]: Started Session 39 of User zuul.
Oct 01 16:40:27 compute-0 sshd-session[120615]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:40:27 compute-0 ceph-mon[74273]: 8.1b scrub starts
Oct 01 16:40:27 compute-0 ceph-mon[74273]: 8.1b scrub ok
Oct 01 16:40:27 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Oct 01 16:40:27 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Oct 01 16:40:28 compute-0 sudo[120768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhrldlipwsuuxxqeawykkjebqblkevxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336827.5882256-22-147030148812574/AnsiballZ_file.py'
Oct 01 16:40:28 compute-0 sudo[120768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:28 compute-0 python3.9[120770]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:40:28 compute-0 sudo[120768]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:28 compute-0 ceph-mon[74273]: pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:28 compute-0 ceph-mon[74273]: 11.1c scrub starts
Oct 01 16:40:28 compute-0 ceph-mon[74273]: 11.1c scrub ok
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:40:28.647822) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336828648054, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7191, "num_deletes": 251, "total_data_size": 9372540, "memory_usage": 9611328, "flush_reason": "Manual Compaction"}
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336828695643, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7522741, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 140, "largest_seqno": 7328, "table_properties": {"data_size": 7496343, "index_size": 17259, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8069, "raw_key_size": 74737, "raw_average_key_size": 23, "raw_value_size": 7434233, "raw_average_value_size": 2305, "num_data_blocks": 757, "num_entries": 3224, "num_filter_entries": 3224, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336403, "oldest_key_time": 1759336403, "file_creation_time": 1759336828, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 47859 microseconds, and 28461 cpu microseconds.
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:40:28.695697) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7522741 bytes OK
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:40:28.695716) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:40:28.697329) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:40:28.697343) EVENT_LOG_v1 {"time_micros": 1759336828697338, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:40:28.697370) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9341308, prev total WAL file size 9341308, number of live WAL files 2.
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:40:28.699797) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7346KB) 13(52KB) 8(1944B)]
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336828699951, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7578838, "oldest_snapshot_seqno": -1}
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3039 keys, 7534632 bytes, temperature: kUnknown
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336828737432, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7534632, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7508715, "index_size": 17231, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7621, "raw_key_size": 72782, "raw_average_key_size": 23, "raw_value_size": 7448257, "raw_average_value_size": 2450, "num_data_blocks": 758, "num_entries": 3039, "num_filter_entries": 3039, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336399, "oldest_key_time": 0, "file_creation_time": 1759336828, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:40:28.737688) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7534632 bytes
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:40:28.738954) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.8 rd, 200.6 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.2, 0.0 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3329, records dropped: 290 output_compression: NoCompression
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:40:28.738973) EVENT_LOG_v1 {"time_micros": 1759336828738963, "job": 4, "event": "compaction_finished", "compaction_time_micros": 37554, "compaction_time_cpu_micros": 19421, "output_level": 6, "num_output_files": 1, "total_output_size": 7534632, "num_input_records": 3329, "num_output_records": 3039, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336828740700, "job": 4, "event": "table_file_deletion", "file_number": 19}
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336828740765, "job": 4, "event": "table_file_deletion", "file_number": 13}
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336828740804, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct 01 16:40:28 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:40:28.699643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:40:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:40:28 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.11 deep-scrub starts
Oct 01 16:40:29 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.11 deep-scrub ok
Oct 01 16:40:29 compute-0 sudo[120921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsgzwwhmysohvoolmtzsavtkuozhawnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336828.5928054-34-104136218527705/AnsiballZ_stat.py'
Oct 01 16:40:29 compute-0 sudo[120921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:29 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Oct 01 16:40:29 compute-0 python3.9[120923]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:40:29 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Oct 01 16:40:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:29 compute-0 sudo[120921]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:29 compute-0 sudo[120999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aknrdziaxpubieoyrenpevbkjvqnbisr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336828.5928054-34-104136218527705/AnsiballZ_file.py'
Oct 01 16:40:29 compute-0 sudo[120999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:29 compute-0 ceph-mon[74273]: 7.11 deep-scrub starts
Oct 01 16:40:29 compute-0 ceph-mon[74273]: 7.11 deep-scrub ok
Oct 01 16:40:29 compute-0 python3.9[121001]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:40:29 compute-0 sudo[120999]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:30 compute-0 sshd-session[120618]: Connection closed by 192.168.122.30 port 46558
Oct 01 16:40:30 compute-0 sshd-session[120615]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:40:30 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Oct 01 16:40:30 compute-0 systemd[1]: session-39.scope: Consumed 1.665s CPU time.
Oct 01 16:40:30 compute-0 systemd-logind[788]: Session 39 logged out. Waiting for processes to exit.
Oct 01 16:40:30 compute-0 systemd-logind[788]: Removed session 39.
Oct 01 16:40:30 compute-0 ceph-mon[74273]: 7.13 scrub starts
Oct 01 16:40:30 compute-0 ceph-mon[74273]: 7.13 scrub ok
Oct 01 16:40:30 compute-0 ceph-mon[74273]: pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:31 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct 01 16:40:31 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct 01 16:40:31 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Oct 01 16:40:31 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Oct 01 16:40:32 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Oct 01 16:40:32 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Oct 01 16:40:32 compute-0 ceph-mon[74273]: pgmap v303: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:32 compute-0 ceph-mon[74273]: 3.17 scrub starts
Oct 01 16:40:32 compute-0 ceph-mon[74273]: 3.17 scrub ok
Oct 01 16:40:32 compute-0 ceph-mon[74273]: 11.1e scrub starts
Oct 01 16:40:32 compute-0 ceph-mon[74273]: 11.1e scrub ok
Oct 01 16:40:32 compute-0 ceph-mon[74273]: 10.14 scrub starts
Oct 01 16:40:32 compute-0 ceph-mon[74273]: 10.14 scrub ok
Oct 01 16:40:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:33 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Oct 01 16:40:33 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Oct 01 16:40:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:40:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Oct 01 16:40:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Oct 01 16:40:34 compute-0 ceph-mon[74273]: pgmap v304: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:34 compute-0 ceph-mon[74273]: 8.9 scrub starts
Oct 01 16:40:34 compute-0 ceph-mon[74273]: 8.9 scrub ok
Oct 01 16:40:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:35 compute-0 ceph-mon[74273]: 7.6 scrub starts
Oct 01 16:40:35 compute-0 ceph-mon[74273]: 7.6 scrub ok
Oct 01 16:40:35 compute-0 sshd-session[121026]: Accepted publickey for zuul from 192.168.122.30 port 52808 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:40:35 compute-0 systemd-logind[788]: New session 40 of user zuul.
Oct 01 16:40:35 compute-0 systemd[1]: Started Session 40 of User zuul.
Oct 01 16:40:35 compute-0 sshd-session[121026]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:40:36 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Oct 01 16:40:36 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Oct 01 16:40:36 compute-0 ceph-mon[74273]: pgmap v305: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:36 compute-0 ceph-mon[74273]: 5.18 scrub starts
Oct 01 16:40:36 compute-0 ceph-mon[74273]: 5.18 scrub ok
Oct 01 16:40:37 compute-0 python3.9[121179]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:40:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:37 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Oct 01 16:40:37 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Oct 01 16:40:37 compute-0 sudo[121333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqgmopkkjxlulccujpxfyplhifyareac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336837.5054152-33-185317926689144/AnsiballZ_file.py'
Oct 01 16:40:37 compute-0 sudo[121333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:38 compute-0 python3.9[121335]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:40:38 compute-0 sudo[121333]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:38 compute-0 sudo[121360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:40:38 compute-0 sudo[121360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:40:38 compute-0 sudo[121360]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:38 compute-0 sudo[121406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:40:38 compute-0 sudo[121406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:40:38 compute-0 sudo[121406]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:38 compute-0 sudo[121456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:40:38 compute-0 sudo[121456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:40:38 compute-0 sudo[121456]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:38 compute-0 sudo[121508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:40:38 compute-0 sudo[121508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:40:38 compute-0 ceph-mon[74273]: pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:38 compute-0 ceph-mon[74273]: 9.3 scrub starts
Oct 01 16:40:38 compute-0 ceph-mon[74273]: 9.3 scrub ok
Oct 01 16:40:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:40:38 compute-0 sudo[121628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yafkpfcxjgczcjpcxppctqrjrhlcnzkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336838.333427-41-83049199652669/AnsiballZ_stat.py'
Oct 01 16:40:38 compute-0 sudo[121628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:38 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Oct 01 16:40:38 compute-0 sudo[121508]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:38 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Oct 01 16:40:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:40:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:40:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:40:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:40:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:40:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:40:39 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev fafd5a39-d9ce-4111-ae91-b7a4565c6241 does not exist
Oct 01 16:40:39 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev e413d683-4f53-4535-896e-961ad3200034 does not exist
Oct 01 16:40:39 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 2fd8050b-a887-4298-b5f3-09ca02c2bb4b does not exist
Oct 01 16:40:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:40:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:40:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:40:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:40:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:40:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:40:39 compute-0 python3.9[121630]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:40:39 compute-0 sudo[121643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:40:39 compute-0 sudo[121643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:40:39 compute-0 sudo[121643]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:39 compute-0 sudo[121628]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:39 compute-0 sudo[121670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:40:39 compute-0 sudo[121670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:40:39 compute-0 sudo[121670]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:39 compute-0 sudo[121718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:40:39 compute-0 sudo[121718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:40:39 compute-0 sudo[121718]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:39 compute-0 sudo[121766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:40:39 compute-0 sudo[121766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:40:39 compute-0 sudo[121818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mszspmwixglsemvqmuvuingqccvodxyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336838.333427-41-83049199652669/AnsiballZ_file.py'
Oct 01 16:40:39 compute-0 sudo[121818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:39 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Oct 01 16:40:39 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Oct 01 16:40:39 compute-0 python3.9[121820]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.o25t3hak recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:40:39 compute-0 sudo[121818]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:39 compute-0 podman[121883]: 2025-10-01 16:40:39.687512058 +0000 UTC m=+0.052007955 container create ede53ad2bde3f97737480a20975a9db39bb6f4ba4c52b8ef2dc3f2aae0222349 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_leakey, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 16:40:39 compute-0 ceph-mon[74273]: 11.1f scrub starts
Oct 01 16:40:39 compute-0 ceph-mon[74273]: 11.1f scrub ok
Oct 01 16:40:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:40:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:40:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:40:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:40:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:40:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:40:39 compute-0 systemd[1]: Started libpod-conmon-ede53ad2bde3f97737480a20975a9db39bb6f4ba4c52b8ef2dc3f2aae0222349.scope.
Oct 01 16:40:39 compute-0 podman[121883]: 2025-10-01 16:40:39.657751014 +0000 UTC m=+0.022246971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:40:39 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:40:39 compute-0 podman[121883]: 2025-10-01 16:40:39.775053974 +0000 UTC m=+0.139549931 container init ede53ad2bde3f97737480a20975a9db39bb6f4ba4c52b8ef2dc3f2aae0222349 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_leakey, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:40:39 compute-0 podman[121883]: 2025-10-01 16:40:39.785230958 +0000 UTC m=+0.149726805 container start ede53ad2bde3f97737480a20975a9db39bb6f4ba4c52b8ef2dc3f2aae0222349 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_leakey, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:40:39 compute-0 podman[121883]: 2025-10-01 16:40:39.789317147 +0000 UTC m=+0.153813044 container attach ede53ad2bde3f97737480a20975a9db39bb6f4ba4c52b8ef2dc3f2aae0222349 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_leakey, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:40:39 compute-0 dazzling_leakey[121902]: 167 167
Oct 01 16:40:39 compute-0 systemd[1]: libpod-ede53ad2bde3f97737480a20975a9db39bb6f4ba4c52b8ef2dc3f2aae0222349.scope: Deactivated successfully.
Oct 01 16:40:39 compute-0 podman[121883]: 2025-10-01 16:40:39.792435187 +0000 UTC m=+0.156931084 container died ede53ad2bde3f97737480a20975a9db39bb6f4ba4c52b8ef2dc3f2aae0222349 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 16:40:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4cc10e0889410ef58b25a43b9c517b4095c6fdb92bb0fb5a6a4ecef77d3d4f3-merged.mount: Deactivated successfully.
Oct 01 16:40:39 compute-0 podman[121883]: 2025-10-01 16:40:39.847645501 +0000 UTC m=+0.212141368 container remove ede53ad2bde3f97737480a20975a9db39bb6f4ba4c52b8ef2dc3f2aae0222349 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_leakey, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:40:39 compute-0 systemd[1]: libpod-conmon-ede53ad2bde3f97737480a20975a9db39bb6f4ba4c52b8ef2dc3f2aae0222349.scope: Deactivated successfully.
Oct 01 16:40:39 compute-0 podman[121977]: 2025-10-01 16:40:39.987059116 +0000 UTC m=+0.042180791 container create 4b4768eb8722e2bbf9c56a418dea0697b10f6786d864d23ebb1820bb0a77667f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:40:40 compute-0 systemd[1]: Started libpod-conmon-4b4768eb8722e2bbf9c56a418dea0697b10f6786d864d23ebb1820bb0a77667f.scope.
Oct 01 16:40:40 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:40:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6584d5a4b43bd67d6c5c75771ba59ccda4080b59909b2220624acdd1b202b7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:40:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6584d5a4b43bd67d6c5c75771ba59ccda4080b59909b2220624acdd1b202b7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:40:40 compute-0 podman[121977]: 2025-10-01 16:40:39.966499856 +0000 UTC m=+0.021621561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:40:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6584d5a4b43bd67d6c5c75771ba59ccda4080b59909b2220624acdd1b202b7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:40:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6584d5a4b43bd67d6c5c75771ba59ccda4080b59909b2220624acdd1b202b7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:40:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6584d5a4b43bd67d6c5c75771ba59ccda4080b59909b2220624acdd1b202b7a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:40:40 compute-0 podman[121977]: 2025-10-01 16:40:40.073233589 +0000 UTC m=+0.128355304 container init 4b4768eb8722e2bbf9c56a418dea0697b10f6786d864d23ebb1820bb0a77667f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 16:40:40 compute-0 podman[121977]: 2025-10-01 16:40:40.081752476 +0000 UTC m=+0.136874151 container start 4b4768eb8722e2bbf9c56a418dea0697b10f6786d864d23ebb1820bb0a77667f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lovelace, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:40:40 compute-0 podman[121977]: 2025-10-01 16:40:40.085585427 +0000 UTC m=+0.140707102 container attach 4b4768eb8722e2bbf9c56a418dea0697b10f6786d864d23ebb1820bb0a77667f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lovelace, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:40:40 compute-0 sudo[122072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwgixftznidtbogkaxqgcpykatlvkeda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336839.8546062-61-210053616935471/AnsiballZ_stat.py'
Oct 01 16:40:40 compute-0 sudo[122072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:40 compute-0 python3.9[122074]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:40:40 compute-0 sudo[122072]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:40 compute-0 sudo[122150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yponwkvomkmaudsxkpfwgesqudohprxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336839.8546062-61-210053616935471/AnsiballZ_file.py'
Oct 01 16:40:40 compute-0 sudo[122150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:40 compute-0 ceph-mon[74273]: pgmap v307: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:40 compute-0 ceph-mon[74273]: 9.1b scrub starts
Oct 01 16:40:40 compute-0 ceph-mon[74273]: 9.1b scrub ok
Oct 01 16:40:40 compute-0 python3.9[122152]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.nqqd_kjb recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:40:40 compute-0 sudo[122150]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:41 compute-0 jolly_lovelace[122017]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:40:41 compute-0 jolly_lovelace[122017]: --> relative data size: 1.0
Oct 01 16:40:41 compute-0 jolly_lovelace[122017]: --> All data devices are unavailable
Oct 01 16:40:41 compute-0 systemd[1]: libpod-4b4768eb8722e2bbf9c56a418dea0697b10f6786d864d23ebb1820bb0a77667f.scope: Deactivated successfully.
Oct 01 16:40:41 compute-0 conmon[122017]: conmon 4b4768eb8722e2bbf9c5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4b4768eb8722e2bbf9c56a418dea0697b10f6786d864d23ebb1820bb0a77667f.scope/container/memory.events
Oct 01 16:40:41 compute-0 podman[121977]: 2025-10-01 16:40:41.066228746 +0000 UTC m=+1.121350441 container died 4b4768eb8722e2bbf9c56a418dea0697b10f6786d864d23ebb1820bb0a77667f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:40:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6584d5a4b43bd67d6c5c75771ba59ccda4080b59909b2220624acdd1b202b7a-merged.mount: Deactivated successfully.
Oct 01 16:40:41 compute-0 podman[121977]: 2025-10-01 16:40:41.141709777 +0000 UTC m=+1.196831452 container remove 4b4768eb8722e2bbf9c56a418dea0697b10f6786d864d23ebb1820bb0a77667f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:40:41 compute-0 systemd[1]: libpod-conmon-4b4768eb8722e2bbf9c56a418dea0697b10f6786d864d23ebb1820bb0a77667f.scope: Deactivated successfully.
Oct 01 16:40:41 compute-0 sudo[121766]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:41 compute-0 sudo[122286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:40:41 compute-0 sudo[122286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:40:41 compute-0 sudo[122286]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:41 compute-0 sudo[122317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:40:41 compute-0 sudo[122317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:40:41 compute-0 sudo[122317]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:41 compute-0 sudo[122362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:40:41 compute-0 sudo[122362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:40:41 compute-0 sudo[122362]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:40:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:40:41 compute-0 sudo[122412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yupjsqfduzmnzswmvftzqqwgvimsyfaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336841.0322053-74-103431104445072/AnsiballZ_file.py'
Oct 01 16:40:41 compute-0 sudo[122412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:40:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:40:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:40:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:40:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:41 compute-0 sudo[122416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:40:41 compute-0 sudo[122416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:40:41 compute-0 python3.9[122415]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:40:41 compute-0 sudo[122412]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:41 compute-0 podman[122502]: 2025-10-01 16:40:41.680837285 +0000 UTC m=+0.040383218 container create a5e4bf30d1a8fc9b1e68fd52d24f85c58d617cf1a0e260d68f437e53873d07e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 16:40:41 compute-0 systemd[1]: Started libpod-conmon-a5e4bf30d1a8fc9b1e68fd52d24f85c58d617cf1a0e260d68f437e53873d07e8.scope.
Oct 01 16:40:41 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:40:41 compute-0 podman[122502]: 2025-10-01 16:40:41.660572338 +0000 UTC m=+0.020118301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:40:41 compute-0 podman[122502]: 2025-10-01 16:40:41.770180414 +0000 UTC m=+0.129726347 container init a5e4bf30d1a8fc9b1e68fd52d24f85c58d617cf1a0e260d68f437e53873d07e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_volhard, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 16:40:41 compute-0 podman[122502]: 2025-10-01 16:40:41.779507322 +0000 UTC m=+0.139053275 container start a5e4bf30d1a8fc9b1e68fd52d24f85c58d617cf1a0e260d68f437e53873d07e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_volhard, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:40:41 compute-0 nostalgic_volhard[122520]: 167 167
Oct 01 16:40:41 compute-0 systemd[1]: libpod-a5e4bf30d1a8fc9b1e68fd52d24f85c58d617cf1a0e260d68f437e53873d07e8.scope: Deactivated successfully.
Oct 01 16:40:41 compute-0 podman[122502]: 2025-10-01 16:40:41.786597269 +0000 UTC m=+0.146143222 container attach a5e4bf30d1a8fc9b1e68fd52d24f85c58d617cf1a0e260d68f437e53873d07e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 16:40:41 compute-0 podman[122502]: 2025-10-01 16:40:41.788113509 +0000 UTC m=+0.147659452 container died a5e4bf30d1a8fc9b1e68fd52d24f85c58d617cf1a0e260d68f437e53873d07e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_volhard, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:40:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cbf6f956771708b518c305f656ed10c9f8924647711172a39f1a400082504a5-merged.mount: Deactivated successfully.
Oct 01 16:40:41 compute-0 podman[122502]: 2025-10-01 16:40:41.824387945 +0000 UTC m=+0.183933868 container remove a5e4bf30d1a8fc9b1e68fd52d24f85c58d617cf1a0e260d68f437e53873d07e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 16:40:41 compute-0 systemd[1]: libpod-conmon-a5e4bf30d1a8fc9b1e68fd52d24f85c58d617cf1a0e260d68f437e53873d07e8.scope: Deactivated successfully.
Oct 01 16:40:41 compute-0 podman[122617]: 2025-10-01 16:40:41.969503191 +0000 UTC m=+0.048454055 container create 733adb3171faaf0c7975c30f287b166c653e49768f0f57eca3fd0244bd7559af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_haibt, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:40:41 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Oct 01 16:40:42 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Oct 01 16:40:42 compute-0 systemd[1]: Started libpod-conmon-733adb3171faaf0c7975c30f287b166c653e49768f0f57eca3fd0244bd7559af.scope.
Oct 01 16:40:42 compute-0 podman[122617]: 2025-10-01 16:40:41.948029058 +0000 UTC m=+0.026979932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:40:42 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6050db6fc9863caaf5c876471ac1c22959c50192a25f5712797d99f94d37d640/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6050db6fc9863caaf5c876471ac1c22959c50192a25f5712797d99f94d37d640/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6050db6fc9863caaf5c876471ac1c22959c50192a25f5712797d99f94d37d640/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6050db6fc9863caaf5c876471ac1c22959c50192a25f5712797d99f94d37d640/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:40:42 compute-0 podman[122617]: 2025-10-01 16:40:42.058178535 +0000 UTC m=+0.137129389 container init 733adb3171faaf0c7975c30f287b166c653e49768f0f57eca3fd0244bd7559af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_haibt, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:40:42 compute-0 podman[122617]: 2025-10-01 16:40:42.066402889 +0000 UTC m=+0.145353723 container start 733adb3171faaf0c7975c30f287b166c653e49768f0f57eca3fd0244bd7559af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 16:40:42 compute-0 podman[122617]: 2025-10-01 16:40:42.071423313 +0000 UTC m=+0.150374157 container attach 733adb3171faaf0c7975c30f287b166c653e49768f0f57eca3fd0244bd7559af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:40:42 compute-0 sudo[122689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgxiqwroypcihvudkhwwosalfppgmsym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336841.7412791-82-181341202202683/AnsiballZ_stat.py'
Oct 01 16:40:42 compute-0 sudo[122689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:42 compute-0 python3.9[122691]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:40:42 compute-0 sudo[122689]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:42 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Oct 01 16:40:42 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Oct 01 16:40:42 compute-0 sudo[122767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvpuitjytfkoozanodrkifiecubwrszz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336841.7412791-82-181341202202683/AnsiballZ_file.py'
Oct 01 16:40:42 compute-0 sudo[122767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:42 compute-0 ceph-mon[74273]: pgmap v308: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:42 compute-0 ceph-mon[74273]: 8.1c scrub starts
Oct 01 16:40:42 compute-0 ceph-mon[74273]: 8.1c scrub ok
Oct 01 16:40:42 compute-0 sad_haibt[122657]: {
Oct 01 16:40:42 compute-0 sad_haibt[122657]:     "0": [
Oct 01 16:40:42 compute-0 sad_haibt[122657]:         {
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "devices": [
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "/dev/loop3"
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             ],
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "lv_name": "ceph_lv0",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "lv_size": "21470642176",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "name": "ceph_lv0",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "tags": {
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.cluster_name": "ceph",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.crush_device_class": "",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.encrypted": "0",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.osd_id": "0",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.type": "block",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.vdo": "0"
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             },
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "type": "block",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "vg_name": "ceph_vg0"
Oct 01 16:40:42 compute-0 sad_haibt[122657]:         }
Oct 01 16:40:42 compute-0 sad_haibt[122657]:     ],
Oct 01 16:40:42 compute-0 sad_haibt[122657]:     "1": [
Oct 01 16:40:42 compute-0 sad_haibt[122657]:         {
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "devices": [
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "/dev/loop4"
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             ],
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "lv_name": "ceph_lv1",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "lv_size": "21470642176",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "name": "ceph_lv1",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "tags": {
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.cluster_name": "ceph",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.crush_device_class": "",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.encrypted": "0",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.osd_id": "1",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.type": "block",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.vdo": "0"
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             },
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "type": "block",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "vg_name": "ceph_vg1"
Oct 01 16:40:42 compute-0 sad_haibt[122657]:         }
Oct 01 16:40:42 compute-0 sad_haibt[122657]:     ],
Oct 01 16:40:42 compute-0 sad_haibt[122657]:     "2": [
Oct 01 16:40:42 compute-0 sad_haibt[122657]:         {
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "devices": [
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "/dev/loop5"
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             ],
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "lv_name": "ceph_lv2",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "lv_size": "21470642176",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "name": "ceph_lv2",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "tags": {
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.cluster_name": "ceph",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.crush_device_class": "",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.encrypted": "0",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.osd_id": "2",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.type": "block",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:                 "ceph.vdo": "0"
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             },
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "type": "block",
Oct 01 16:40:42 compute-0 sad_haibt[122657]:             "vg_name": "ceph_vg2"
Oct 01 16:40:42 compute-0 sad_haibt[122657]:         }
Oct 01 16:40:42 compute-0 sad_haibt[122657]:     ]
Oct 01 16:40:42 compute-0 sad_haibt[122657]: }
Oct 01 16:40:42 compute-0 python3.9[122769]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:40:42 compute-0 systemd[1]: libpod-733adb3171faaf0c7975c30f287b166c653e49768f0f57eca3fd0244bd7559af.scope: Deactivated successfully.
Oct 01 16:40:42 compute-0 podman[122617]: 2025-10-01 16:40:42.842227464 +0000 UTC m=+0.921178298 container died 733adb3171faaf0c7975c30f287b166c653e49768f0f57eca3fd0244bd7559af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct 01 16:40:42 compute-0 sudo[122767]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-6050db6fc9863caaf5c876471ac1c22959c50192a25f5712797d99f94d37d640-merged.mount: Deactivated successfully.
Oct 01 16:40:42 compute-0 podman[122617]: 2025-10-01 16:40:42.901014419 +0000 UTC m=+0.979965253 container remove 733adb3171faaf0c7975c30f287b166c653e49768f0f57eca3fd0244bd7559af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_haibt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:40:42 compute-0 systemd[1]: libpod-conmon-733adb3171faaf0c7975c30f287b166c653e49768f0f57eca3fd0244bd7559af.scope: Deactivated successfully.
Oct 01 16:40:42 compute-0 sudo[122416]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:43 compute-0 sudo[122811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:40:43 compute-0 sudo[122811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:40:43 compute-0 sudo[122811]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:43 compute-0 sudo[122872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:40:43 compute-0 sudo[122872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:40:43 compute-0 sudo[122872]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:43 compute-0 sudo[122914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:40:43 compute-0 sudo[122914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:40:43 compute-0 sudo[122914]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:43 compute-0 sudo[122961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:40:43 compute-0 sudo[122961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:40:43 compute-0 sudo[123036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvvdcdcoavrqsvbaatwtyskgegkmrmcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336842.9858906-82-71373560365241/AnsiballZ_stat.py'
Oct 01 16:40:43 compute-0 sudo[123036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:43 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.d scrub starts
Oct 01 16:40:43 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.d scrub ok
Oct 01 16:40:43 compute-0 python3.9[123038]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:40:43 compute-0 sudo[123036]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:43 compute-0 podman[123082]: 2025-10-01 16:40:43.683271316 +0000 UTC m=+0.062418805 container create ae0ec88b2c755a83c6b5d7ff98c31b9f804301b80d1dee69f3485dc659312000 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_spence, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:40:43 compute-0 ceph-mon[74273]: 9.1 scrub starts
Oct 01 16:40:43 compute-0 ceph-mon[74273]: 9.1 scrub ok
Oct 01 16:40:43 compute-0 systemd[1]: Started libpod-conmon-ae0ec88b2c755a83c6b5d7ff98c31b9f804301b80d1dee69f3485dc659312000.scope.
Oct 01 16:40:43 compute-0 podman[123082]: 2025-10-01 16:40:43.65278425 +0000 UTC m=+0.031931799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:40:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:40:43 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:40:43 compute-0 podman[123082]: 2025-10-01 16:40:43.788213508 +0000 UTC m=+0.167360967 container init ae0ec88b2c755a83c6b5d7ff98c31b9f804301b80d1dee69f3485dc659312000 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:40:43 compute-0 podman[123082]: 2025-10-01 16:40:43.796833476 +0000 UTC m=+0.175980955 container start ae0ec88b2c755a83c6b5d7ff98c31b9f804301b80d1dee69f3485dc659312000 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:40:43 compute-0 podman[123082]: 2025-10-01 16:40:43.800989723 +0000 UTC m=+0.180137182 container attach ae0ec88b2c755a83c6b5d7ff98c31b9f804301b80d1dee69f3485dc659312000 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:40:43 compute-0 compassionate_spence[123121]: 167 167
Oct 01 16:40:43 compute-0 systemd[1]: libpod-ae0ec88b2c755a83c6b5d7ff98c31b9f804301b80d1dee69f3485dc659312000.scope: Deactivated successfully.
Oct 01 16:40:43 compute-0 podman[123082]: 2025-10-01 16:40:43.80452572 +0000 UTC m=+0.183673199 container died ae0ec88b2c755a83c6b5d7ff98c31b9f804301b80d1dee69f3485dc659312000 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 16:40:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d356a01def8cc75606293fe199ce40dc0f244f5b4c6e90794bb847732680495a-merged.mount: Deactivated successfully.
Oct 01 16:40:43 compute-0 podman[123082]: 2025-10-01 16:40:43.847020225 +0000 UTC m=+0.226167694 container remove ae0ec88b2c755a83c6b5d7ff98c31b9f804301b80d1dee69f3485dc659312000 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 16:40:43 compute-0 systemd[1]: libpod-conmon-ae0ec88b2c755a83c6b5d7ff98c31b9f804301b80d1dee69f3485dc659312000.scope: Deactivated successfully.
Oct 01 16:40:43 compute-0 sudo[123190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpshngussvrudtqomnedmrnznlulraam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336842.9858906-82-71373560365241/AnsiballZ_file.py'
Oct 01 16:40:43 compute-0 sudo[123190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:44 compute-0 podman[123199]: 2025-10-01 16:40:44.025046909 +0000 UTC m=+0.046371261 container create 070c454d604836ea3284c7d8a7d98dd5e3d3d3caf085ebd6f8ce9c4badea1eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_jones, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:40:44 compute-0 python3.9[123193]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:40:44 compute-0 systemd[1]: Started libpod-conmon-070c454d604836ea3284c7d8a7d98dd5e3d3d3caf085ebd6f8ce9c4badea1eba.scope.
Oct 01 16:40:44 compute-0 sudo[123190]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:44 compute-0 podman[123199]: 2025-10-01 16:40:44.000818557 +0000 UTC m=+0.022142889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:40:44 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:40:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/383afcbbfe92f54aba36c5a29bd1479232577deea72f44aa67763a914b66554b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:40:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/383afcbbfe92f54aba36c5a29bd1479232577deea72f44aa67763a914b66554b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:40:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/383afcbbfe92f54aba36c5a29bd1479232577deea72f44aa67763a914b66554b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:40:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/383afcbbfe92f54aba36c5a29bd1479232577deea72f44aa67763a914b66554b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:40:44 compute-0 podman[123199]: 2025-10-01 16:40:44.12375075 +0000 UTC m=+0.145075122 container init 070c454d604836ea3284c7d8a7d98dd5e3d3d3caf085ebd6f8ce9c4badea1eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:40:44 compute-0 podman[123199]: 2025-10-01 16:40:44.137783027 +0000 UTC m=+0.159107359 container start 070c454d604836ea3284c7d8a7d98dd5e3d3d3caf085ebd6f8ce9c4badea1eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_jones, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 16:40:44 compute-0 podman[123199]: 2025-10-01 16:40:44.141751953 +0000 UTC m=+0.163076285 container attach 070c454d604836ea3284c7d8a7d98dd5e3d3d3caf085ebd6f8ce9c4badea1eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_jones, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:40:44 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct 01 16:40:44 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct 01 16:40:44 compute-0 sudo[123370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdkkjotcvkalpybkkxgmvwalfcapbwom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336844.2606964-105-121586216535471/AnsiballZ_file.py'
Oct 01 16:40:44 compute-0 sudo[123370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:44 compute-0 ceph-mon[74273]: pgmap v309: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:44 compute-0 ceph-mon[74273]: 9.d scrub starts
Oct 01 16:40:44 compute-0 ceph-mon[74273]: 9.d scrub ok
Oct 01 16:40:44 compute-0 python3.9[123372]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:40:44 compute-0 sudo[123370]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:44 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct 01 16:40:45 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct 01 16:40:45 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Oct 01 16:40:45 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Oct 01 16:40:45 compute-0 sleepy_jones[123216]: {
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:         "osd_id": 2,
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:         "type": "bluestore"
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:     },
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:         "osd_id": 0,
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:         "type": "bluestore"
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:     },
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:         "osd_id": 1,
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:         "type": "bluestore"
Oct 01 16:40:45 compute-0 sleepy_jones[123216]:     }
Oct 01 16:40:45 compute-0 sleepy_jones[123216]: }
Oct 01 16:40:45 compute-0 systemd[1]: libpod-070c454d604836ea3284c7d8a7d98dd5e3d3d3caf085ebd6f8ce9c4badea1eba.scope: Deactivated successfully.
Oct 01 16:40:45 compute-0 systemd[1]: libpod-070c454d604836ea3284c7d8a7d98dd5e3d3d3caf085ebd6f8ce9c4badea1eba.scope: Consumed 1.080s CPU time.
Oct 01 16:40:45 compute-0 podman[123199]: 2025-10-01 16:40:45.216945796 +0000 UTC m=+1.238270178 container died 070c454d604836ea3284c7d8a7d98dd5e3d3d3caf085ebd6f8ce9c4badea1eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_jones, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 16:40:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-383afcbbfe92f54aba36c5a29bd1479232577deea72f44aa67763a914b66554b-merged.mount: Deactivated successfully.
Oct 01 16:40:45 compute-0 podman[123199]: 2025-10-01 16:40:45.277076972 +0000 UTC m=+1.298401294 container remove 070c454d604836ea3284c7d8a7d98dd5e3d3d3caf085ebd6f8ce9c4badea1eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_jones, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:40:45 compute-0 systemd[1]: libpod-conmon-070c454d604836ea3284c7d8a7d98dd5e3d3d3caf085ebd6f8ce9c4badea1eba.scope: Deactivated successfully.
Oct 01 16:40:45 compute-0 sudo[122961]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:40:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:40:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:40:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:40:45 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 326267f0-c9a1-40c5-a774-c63840591cdf does not exist
Oct 01 16:40:45 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 5065e28d-d2c3-43cb-94f6-665a607e67be does not exist
Oct 01 16:40:45 compute-0 sudo[123562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvursmzxcrilstsnvhfqfjcuvnlfqlsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336844.9932811-113-73722652955407/AnsiballZ_stat.py'
Oct 01 16:40:45 compute-0 sudo[123562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:45 compute-0 sudo[123563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:40:45 compute-0 sudo[123563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:40:45 compute-0 sudo[123563]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:45 compute-0 sudo[123590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:40:45 compute-0 sudo[123590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:40:45 compute-0 sudo[123590]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:45 compute-0 python3.9[123573]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:40:45 compute-0 sudo[123562]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:45 compute-0 ceph-mon[74273]: 9.9 scrub starts
Oct 01 16:40:45 compute-0 ceph-mon[74273]: 9.9 scrub ok
Oct 01 16:40:45 compute-0 ceph-mon[74273]: 5.19 scrub starts
Oct 01 16:40:45 compute-0 ceph-mon[74273]: 5.19 scrub ok
Oct 01 16:40:45 compute-0 ceph-mon[74273]: 3.16 scrub starts
Oct 01 16:40:45 compute-0 ceph-mon[74273]: 3.16 scrub ok
Oct 01 16:40:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:40:45 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:40:45 compute-0 sudo[123690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plkhzvkcyqtmubpydsefexweglmobeda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336844.9932811-113-73722652955407/AnsiballZ_file.py'
Oct 01 16:40:45 compute-0 sudo[123690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:45 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct 01 16:40:45 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct 01 16:40:46 compute-0 python3.9[123692]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:40:46 compute-0 sudo[123690]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:46 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.b deep-scrub starts
Oct 01 16:40:46 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.b deep-scrub ok
Oct 01 16:40:46 compute-0 sudo[123842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqwnwfriehwfajfzigbckyvcpdltaxfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336846.1799836-125-102129129848864/AnsiballZ_stat.py'
Oct 01 16:40:46 compute-0 sudo[123842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:46 compute-0 python3.9[123844]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:40:46 compute-0 ceph-mon[74273]: pgmap v310: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:46 compute-0 ceph-mon[74273]: 4.10 scrub starts
Oct 01 16:40:46 compute-0 ceph-mon[74273]: 4.10 scrub ok
Oct 01 16:40:46 compute-0 sudo[123842]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:46 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.1 deep-scrub starts
Oct 01 16:40:46 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.1 deep-scrub ok
Oct 01 16:40:47 compute-0 sudo[123920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfxwbnaxxgchhvakogzgysiaeshmjyyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336846.1799836-125-102129129848864/AnsiballZ_file.py'
Oct 01 16:40:47 compute-0 sudo[123920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:47 compute-0 python3.9[123922]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:40:47 compute-0 sudo[123920]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:47 compute-0 ceph-mon[74273]: 9.b deep-scrub starts
Oct 01 16:40:47 compute-0 ceph-mon[74273]: 9.b deep-scrub ok
Oct 01 16:40:47 compute-0 ceph-mon[74273]: 7.1 deep-scrub starts
Oct 01 16:40:47 compute-0 ceph-mon[74273]: 7.1 deep-scrub ok
Oct 01 16:40:47 compute-0 sudo[124072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqvuszmkwrcmscwumkhjouqycsyoizrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336847.3398306-137-119040596671877/AnsiballZ_systemd.py'
Oct 01 16:40:47 compute-0 sudo[124072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:48 compute-0 python3.9[124074]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:40:48 compute-0 systemd[1]: Reloading.
Oct 01 16:40:48 compute-0 systemd-sysv-generator[124104]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:40:48 compute-0 systemd-rc-local-generator[124100]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:40:48 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct 01 16:40:48 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct 01 16:40:48 compute-0 sudo[124072]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:48 compute-0 ceph-mon[74273]: pgmap v311: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:40:49 compute-0 sudo[124262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkhwqpbkwcvkthogknekrexgtildwgpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336848.7603014-145-24410785190576/AnsiballZ_stat.py'
Oct 01 16:40:49 compute-0 sudo[124262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:49 compute-0 python3.9[124264]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:40:49 compute-0 sudo[124262]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:49 compute-0 sudo[124340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjfrppebgglgnleuhqfioytcmlklmgwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336848.7603014-145-24410785190576/AnsiballZ_file.py'
Oct 01 16:40:49 compute-0 sudo[124340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:49 compute-0 python3.9[124342]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:40:49 compute-0 sudo[124340]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:49 compute-0 ceph-mon[74273]: 6.7 scrub starts
Oct 01 16:40:49 compute-0 ceph-mon[74273]: 6.7 scrub ok
Oct 01 16:40:50 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct 01 16:40:50 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct 01 16:40:50 compute-0 sudo[124492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzijktbbbpghylteumjpuxndpnkqmkkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336849.867684-157-154500662780848/AnsiballZ_stat.py'
Oct 01 16:40:50 compute-0 sudo[124492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:50 compute-0 python3.9[124494]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:40:50 compute-0 sudo[124492]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:50 compute-0 sudo[124570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhggqwkhqgekksjmvujwhbfttuktlepw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336849.867684-157-154500662780848/AnsiballZ_file.py'
Oct 01 16:40:50 compute-0 sudo[124570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:50 compute-0 ceph-mon[74273]: pgmap v312: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:50 compute-0 ceph-mon[74273]: 3.11 scrub starts
Oct 01 16:40:50 compute-0 ceph-mon[74273]: 3.11 scrub ok
Oct 01 16:40:50 compute-0 python3.9[124572]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:40:50 compute-0 sudo[124570]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:51 compute-0 sudo[124722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuwfpvsjusdwaavcgjqyhiuuhedtabix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336851.049841-169-168923685164688/AnsiballZ_systemd.py'
Oct 01 16:40:51 compute-0 sudo[124722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:51 compute-0 python3.9[124724]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:40:51 compute-0 systemd[1]: Reloading.
Oct 01 16:40:51 compute-0 systemd-rc-local-generator[124754]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:40:51 compute-0 systemd-sysv-generator[124758]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:40:52 compute-0 systemd[1]: Starting Create netns directory...
Oct 01 16:40:52 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 01 16:40:52 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 01 16:40:52 compute-0 systemd[1]: Finished Create netns directory.
Oct 01 16:40:52 compute-0 sudo[124722]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:52 compute-0 ceph-mon[74273]: pgmap v313: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:52 compute-0 python3.9[124915]: ansible-ansible.builtin.service_facts Invoked
Oct 01 16:40:52 compute-0 network[124932]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 01 16:40:52 compute-0 network[124933]: 'network-scripts' will be removed from distribution in near future.
Oct 01 16:40:52 compute-0 network[124934]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 01 16:40:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:40:54 compute-0 ceph-mon[74273]: pgmap v314: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:56 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.e scrub starts
Oct 01 16:40:56 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.e scrub ok
Oct 01 16:40:56 compute-0 sudo[125197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obgdxxcdyymyebswhwgmiqxpfgssewia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336856.077178-195-273705364756085/AnsiballZ_stat.py'
Oct 01 16:40:56 compute-0 sudo[125197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:56 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct 01 16:40:56 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct 01 16:40:56 compute-0 python3.9[125199]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:40:56 compute-0 sudo[125197]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:56 compute-0 ceph-mon[74273]: pgmap v315: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:56 compute-0 ceph-mon[74273]: 9.e scrub starts
Oct 01 16:40:56 compute-0 ceph-mon[74273]: 9.e scrub ok
Oct 01 16:40:56 compute-0 sudo[125275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okypbanbnqfstleaymzevrmmfsewkurt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336856.077178-195-273705364756085/AnsiballZ_file.py'
Oct 01 16:40:56 compute-0 sudo[125275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:57 compute-0 python3.9[125277]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:40:57 compute-0 sudo[125275]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:57 compute-0 sudo[125427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnlmfveonozzupqlnprcasahxkeljtme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336857.3164618-208-47951588469832/AnsiballZ_file.py'
Oct 01 16:40:57 compute-0 sudo[125427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:57 compute-0 python3.9[125429]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:40:57 compute-0 sudo[125427]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:57 compute-0 ceph-mon[74273]: 6.3 scrub starts
Oct 01 16:40:57 compute-0 ceph-mon[74273]: 6.3 scrub ok
Oct 01 16:40:58 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Oct 01 16:40:58 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Oct 01 16:40:58 compute-0 sudo[125579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciugadidhjkdmadlpiefnizcqpjxuvbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336857.9862835-216-117329532242326/AnsiballZ_stat.py'
Oct 01 16:40:58 compute-0 sudo[125579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:58 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Oct 01 16:40:58 compute-0 python3.9[125581]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:40:58 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Oct 01 16:40:58 compute-0 sudo[125579]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:40:58 compute-0 sudo[125657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jafrprhdgfwcnsluxjfehjidbgeblhnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336857.9862835-216-117329532242326/AnsiballZ_file.py'
Oct 01 16:40:58 compute-0 ceph-mon[74273]: pgmap v316: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:58 compute-0 ceph-mon[74273]: 9.6 scrub starts
Oct 01 16:40:58 compute-0 sudo[125657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:40:58 compute-0 ceph-mon[74273]: 9.6 scrub ok
Oct 01 16:40:58 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct 01 16:40:58 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct 01 16:40:59 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct 01 16:40:59 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct 01 16:40:59 compute-0 python3.9[125659]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:40:59 compute-0 sudo[125657]: pam_unix(sudo:session): session closed for user root
Oct 01 16:40:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:40:59 compute-0 ceph-mon[74273]: 9.1d scrub starts
Oct 01 16:40:59 compute-0 ceph-mon[74273]: 9.1d scrub ok
Oct 01 16:40:59 compute-0 ceph-mon[74273]: 4.f scrub starts
Oct 01 16:40:59 compute-0 ceph-mon[74273]: 4.f scrub ok
Oct 01 16:40:59 compute-0 ceph-mon[74273]: 6.8 scrub starts
Oct 01 16:40:59 compute-0 ceph-mon[74273]: 6.8 scrub ok
Oct 01 16:40:59 compute-0 sudo[125809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcadrorbzgcwwhqiemyswzzchqqbkykn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336859.3973267-231-43484186770428/AnsiballZ_timezone.py'
Oct 01 16:40:59 compute-0 sudo[125809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:00 compute-0 python3.9[125811]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 01 16:41:00 compute-0 systemd[1]: Starting Time & Date Service...
Oct 01 16:41:00 compute-0 systemd[1]: Started Time & Date Service.
Oct 01 16:41:00 compute-0 sudo[125809]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:00 compute-0 ceph-mon[74273]: pgmap v317: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:00 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Oct 01 16:41:00 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Oct 01 16:41:00 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Oct 01 16:41:00 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Oct 01 16:41:01 compute-0 sudo[125965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqjloyhuvhftahpgtqfwzzecqlfbafyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336860.6116138-240-73026729083665/AnsiballZ_file.py'
Oct 01 16:41:01 compute-0 sudo[125965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:01 compute-0 python3.9[125967]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:41:01 compute-0 sudo[125965]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:01 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Oct 01 16:41:01 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Oct 01 16:41:01 compute-0 sudo[126117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlpkssfhxnildwrhbqckrjsryvnmgfpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336861.3955746-248-257538746564762/AnsiballZ_stat.py'
Oct 01 16:41:01 compute-0 sudo[126117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:01 compute-0 ceph-mon[74273]: 9.17 scrub starts
Oct 01 16:41:01 compute-0 ceph-mon[74273]: 4.14 scrub starts
Oct 01 16:41:01 compute-0 ceph-mon[74273]: 9.17 scrub ok
Oct 01 16:41:01 compute-0 ceph-mon[74273]: 4.14 scrub ok
Oct 01 16:41:01 compute-0 python3.9[126119]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:41:01 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.f deep-scrub starts
Oct 01 16:41:01 compute-0 sudo[126117]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:02 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.f deep-scrub ok
Oct 01 16:41:02 compute-0 sudo[126195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiufnreugtmgtsotbtckuypvpsttfzmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336861.3955746-248-257538746564762/AnsiballZ_file.py'
Oct 01 16:41:02 compute-0 sudo[126195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:02 compute-0 python3.9[126197]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:41:02 compute-0 sudo[126195]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:02 compute-0 ceph-mon[74273]: pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:02 compute-0 ceph-mon[74273]: 9.5 scrub starts
Oct 01 16:41:02 compute-0 ceph-mon[74273]: 9.5 scrub ok
Oct 01 16:41:02 compute-0 ceph-mon[74273]: 9.f deep-scrub starts
Oct 01 16:41:02 compute-0 ceph-mon[74273]: 9.f deep-scrub ok
Oct 01 16:41:02 compute-0 sudo[126347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxmbmjcexisyqicwzqfvwnfrxjbqbcfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336862.6687973-260-108373307277340/AnsiballZ_stat.py'
Oct 01 16:41:02 compute-0 sudo[126347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:03 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct 01 16:41:03 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct 01 16:41:03 compute-0 python3.9[126349]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:41:03 compute-0 sudo[126347]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:03 compute-0 sudo[126425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmjoqtzmxshlkktzewtjrxecljngsvyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336862.6687973-260-108373307277340/AnsiballZ_file.py'
Oct 01 16:41:03 compute-0 sudo[126425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:03 compute-0 python3.9[126427]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.u4dcl3tn recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:41:03 compute-0 sudo[126425]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:41:03 compute-0 ceph-mon[74273]: 4.d scrub starts
Oct 01 16:41:03 compute-0 ceph-mon[74273]: 4.d scrub ok
Oct 01 16:41:04 compute-0 sudo[126577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzntgwmhfmtlmdlhukceepipaddzxqva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336863.8361254-272-98965125920235/AnsiballZ_stat.py'
Oct 01 16:41:04 compute-0 sudo[126577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:04 compute-0 python3.9[126579]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:41:04 compute-0 sudo[126577]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:04 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct 01 16:41:04 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct 01 16:41:04 compute-0 sudo[126655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exckuooeeucbsrswxrkmtvqpqyevudch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336863.8361254-272-98965125920235/AnsiballZ_file.py'
Oct 01 16:41:04 compute-0 sudo[126655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:04 compute-0 ceph-mon[74273]: pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:04 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Oct 01 16:41:05 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Oct 01 16:41:05 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Oct 01 16:41:05 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Oct 01 16:41:05 compute-0 python3.9[126657]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:41:05 compute-0 sudo[126655]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:05 compute-0 sudo[126807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxxdntreyzspaeavnuaulzhzjystfzwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336865.2200053-285-160299778165641/AnsiballZ_command.py'
Oct 01 16:41:05 compute-0 sudo[126807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:05 compute-0 ceph-mon[74273]: 9.11 scrub starts
Oct 01 16:41:05 compute-0 ceph-mon[74273]: 9.11 scrub ok
Oct 01 16:41:05 compute-0 ceph-mon[74273]: 9.7 scrub starts
Oct 01 16:41:05 compute-0 ceph-mon[74273]: 6.1 scrub starts
Oct 01 16:41:05 compute-0 ceph-mon[74273]: 9.7 scrub ok
Oct 01 16:41:05 compute-0 ceph-mon[74273]: 6.1 scrub ok
Oct 01 16:41:05 compute-0 python3.9[126809]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:41:05 compute-0 sudo[126807]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:06 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct 01 16:41:06 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct 01 16:41:06 compute-0 sudo[126960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zewyioyoodznvndjberzgdaiuxxgjbpx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759336866.2064147-293-81057298473016/AnsiballZ_edpm_nftables_from_files.py'
Oct 01 16:41:06 compute-0 sudo[126960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:06 compute-0 ceph-mon[74273]: pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:06 compute-0 ceph-mon[74273]: 4.9 scrub starts
Oct 01 16:41:06 compute-0 ceph-mon[74273]: 4.9 scrub ok
Oct 01 16:41:06 compute-0 python3[126962]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 01 16:41:06 compute-0 sudo[126960]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:07 compute-0 sudo[127112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvncirjlvnplbaojvvnfqynmjpzvqrvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336867.0983632-301-17154734958012/AnsiballZ_stat.py'
Oct 01 16:41:07 compute-0 sudo[127112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:07 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct 01 16:41:07 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct 01 16:41:07 compute-0 python3.9[127114]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:41:07 compute-0 sudo[127112]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:08 compute-0 sudo[127190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bexyhyzjifdjywyenrwxwnlcbosoeaqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336867.0983632-301-17154734958012/AnsiballZ_file.py'
Oct 01 16:41:08 compute-0 sudo[127190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:08 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.4 deep-scrub starts
Oct 01 16:41:08 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.4 deep-scrub ok
Oct 01 16:41:08 compute-0 python3.9[127192]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:41:08 compute-0 sudo[127190]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:41:08 compute-0 sudo[127342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coyqdmtcsklamdxjewmhcryvjwvbbclx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336868.4063153-313-101610732190502/AnsiballZ_stat.py'
Oct 01 16:41:08 compute-0 sudo[127342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:08 compute-0 ceph-mon[74273]: pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:08 compute-0 ceph-mon[74273]: 6.5 scrub starts
Oct 01 16:41:08 compute-0 ceph-mon[74273]: 6.5 scrub ok
Oct 01 16:41:08 compute-0 ceph-mon[74273]: 4.4 deep-scrub starts
Oct 01 16:41:08 compute-0 ceph-mon[74273]: 4.4 deep-scrub ok
Oct 01 16:41:09 compute-0 python3.9[127344]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:41:09 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Oct 01 16:41:09 compute-0 sudo[127342]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:09 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Oct 01 16:41:09 compute-0 sudo[127420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbjiexficuakmavbkrpytjmxamygiijg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336868.4063153-313-101610732190502/AnsiballZ_file.py'
Oct 01 16:41:09 compute-0 sudo[127420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:09 compute-0 python3.9[127422]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:41:09 compute-0 sudo[127420]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:09 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Oct 01 16:41:09 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Oct 01 16:41:09 compute-0 ceph-mon[74273]: 4.12 scrub starts
Oct 01 16:41:09 compute-0 ceph-mon[74273]: 4.12 scrub ok
Oct 01 16:41:10 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.2 deep-scrub starts
Oct 01 16:41:10 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.2 deep-scrub ok
Oct 01 16:41:10 compute-0 sudo[127572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roiwptvmitkbmfgcgcimmaahkyplofvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336869.9008968-325-247890328857608/AnsiballZ_stat.py'
Oct 01 16:41:10 compute-0 sudo[127572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:10 compute-0 python3.9[127574]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:41:10 compute-0 sudo[127572]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:10 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct 01 16:41:10 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct 01 16:41:10 compute-0 sudo[127650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nizblaflihqfvdaiwhxifpeothawqjmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336869.9008968-325-247890328857608/AnsiballZ_file.py'
Oct 01 16:41:10 compute-0 sudo[127650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:10 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Oct 01 16:41:10 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Oct 01 16:41:10 compute-0 ceph-mon[74273]: pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:10 compute-0 ceph-mon[74273]: 6.9 scrub starts
Oct 01 16:41:10 compute-0 ceph-mon[74273]: 6.9 scrub ok
Oct 01 16:41:10 compute-0 ceph-mon[74273]: 4.2 deep-scrub starts
Oct 01 16:41:10 compute-0 ceph-mon[74273]: 4.2 deep-scrub ok
Oct 01 16:41:11 compute-0 python3.9[127652]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:41:11 compute-0 sudo[127650]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:11 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct 01 16:41:11 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:41:11
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'volumes', '.mgr', 'default.rgw.control', 'images']
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:41:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:11 compute-0 sudo[127802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvkfgleqdlgkzvcrtcqaszlfqzcwwmsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336871.2513013-337-235593123339617/AnsiballZ_stat.py'
Oct 01 16:41:11 compute-0 sudo[127802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:11 compute-0 ceph-mon[74273]: 6.a scrub starts
Oct 01 16:41:11 compute-0 ceph-mon[74273]: 6.a scrub ok
Oct 01 16:41:11 compute-0 ceph-mon[74273]: 9.18 scrub starts
Oct 01 16:41:11 compute-0 ceph-mon[74273]: 9.18 scrub ok
Oct 01 16:41:11 compute-0 ceph-mon[74273]: 4.5 scrub starts
Oct 01 16:41:11 compute-0 ceph-mon[74273]: 4.5 scrub ok
Oct 01 16:41:11 compute-0 python3.9[127804]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:41:11 compute-0 sudo[127802]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:12 compute-0 sudo[127880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljtxazvlkozzssjsgvmoejqlapjfpqio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336871.2513013-337-235593123339617/AnsiballZ_file.py'
Oct 01 16:41:12 compute-0 sudo[127880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:12 compute-0 python3.9[127882]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:41:12 compute-0 sudo[127880]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:12 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.16 deep-scrub starts
Oct 01 16:41:12 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.16 deep-scrub ok
Oct 01 16:41:12 compute-0 ceph-mon[74273]: pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:13 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Oct 01 16:41:13 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Oct 01 16:41:13 compute-0 sudo[128032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fugmyfxdkizrelojfyvysrgnhnjrfuve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336872.6945755-349-41026696630000/AnsiballZ_stat.py'
Oct 01 16:41:13 compute-0 sudo[128032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:13 compute-0 python3.9[128034]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:41:13 compute-0 sudo[128032]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:13 compute-0 sudo[128110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjabntaaldsvebodkjgxjemgzznanzrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336872.6945755-349-41026696630000/AnsiballZ_file.py'
Oct 01 16:41:13 compute-0 sudo[128110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:41:13 compute-0 python3.9[128112]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:41:13 compute-0 sudo[128110]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:13 compute-0 ceph-mon[74273]: 9.16 deep-scrub starts
Oct 01 16:41:13 compute-0 ceph-mon[74273]: 9.16 deep-scrub ok
Oct 01 16:41:13 compute-0 ceph-mon[74273]: 4.8 scrub starts
Oct 01 16:41:13 compute-0 ceph-mon[74273]: 4.8 scrub ok
Oct 01 16:41:13 compute-0 ceph-mon[74273]: pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:14 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct 01 16:41:14 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct 01 16:41:14 compute-0 sudo[128262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfxpwwezdnuzdsoymmqwvsgfppdwztpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336874.0200071-362-96638437036142/AnsiballZ_command.py'
Oct 01 16:41:14 compute-0 sudo[128262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:14 compute-0 python3.9[128264]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:41:14 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Oct 01 16:41:14 compute-0 sudo[128262]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:14 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Oct 01 16:41:14 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.8 deep-scrub starts
Oct 01 16:41:14 compute-0 ceph-mon[74273]: 4.7 scrub starts
Oct 01 16:41:14 compute-0 ceph-mon[74273]: 4.7 scrub ok
Oct 01 16:41:14 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.8 deep-scrub ok
Oct 01 16:41:15 compute-0 sudo[128417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkjwyuycwopzdjyybggxpvtzcuayvruo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336874.795659-370-51374812045078/AnsiballZ_blockinfile.py'
Oct 01 16:41:15 compute-0 sudo[128417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:15 compute-0 python3.9[128419]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:41:15 compute-0 sudo[128417]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:15 compute-0 ceph-mon[74273]: 9.1c scrub starts
Oct 01 16:41:15 compute-0 ceph-mon[74273]: 9.1c scrub ok
Oct 01 16:41:15 compute-0 ceph-mon[74273]: 9.8 deep-scrub starts
Oct 01 16:41:15 compute-0 ceph-mon[74273]: 9.8 deep-scrub ok
Oct 01 16:41:15 compute-0 ceph-mon[74273]: pgmap v325: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:16 compute-0 sudo[128569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gztibrczcaeiybqcjjgsccbewktkemme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336875.8033977-379-13704929648097/AnsiballZ_file.py'
Oct 01 16:41:16 compute-0 sudo[128569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:16 compute-0 python3.9[128571]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:41:16 compute-0 sudo[128569]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:16 compute-0 sudo[128721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbfvpywljghnmzfjqnhnzkhefhjbxmqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336876.4627216-379-215453279605204/AnsiballZ_file.py'
Oct 01 16:41:16 compute-0 sudo[128721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:16 compute-0 python3.9[128723]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:41:17 compute-0 sudo[128721]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:17 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Oct 01 16:41:17 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Oct 01 16:41:17 compute-0 sudo[128873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shuytbmhdbvrxrgatubcveowtwsvynsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336877.1909773-394-229818874731547/AnsiballZ_mount.py'
Oct 01 16:41:17 compute-0 sudo[128873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:18 compute-0 python3.9[128875]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 01 16:41:18 compute-0 sudo[128873]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:18 compute-0 ceph-mon[74273]: pgmap v326: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:18 compute-0 ceph-mon[74273]: 9.1e scrub starts
Oct 01 16:41:18 compute-0 ceph-mon[74273]: 9.1e scrub ok
Oct 01 16:41:18 compute-0 sudo[129025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sytbozywhztpkgpsrbcgmcgimahwihji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336878.188027-394-179640770101509/AnsiballZ_mount.py'
Oct 01 16:41:18 compute-0 sudo[129025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:18 compute-0 python3.9[129027]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 01 16:41:18 compute-0 sudo[129025]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:41:19 compute-0 sshd-session[121029]: Connection closed by 192.168.122.30 port 52808
Oct 01 16:41:19 compute-0 sshd-session[121026]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:41:19 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Oct 01 16:41:19 compute-0 systemd[1]: session-40.scope: Consumed 31.366s CPU time.
Oct 01 16:41:19 compute-0 systemd-logind[788]: Session 40 logged out. Waiting for processes to exit.
Oct 01 16:41:19 compute-0 systemd-logind[788]: Removed session 40.
Oct 01 16:41:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:20 compute-0 ceph-mon[74273]: pgmap v327: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:41:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:41:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:22 compute-0 ceph-mon[74273]: pgmap v328: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:41:24 compute-0 ceph-mon[74273]: pgmap v329: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:24 compute-0 sshd-session[129052]: Accepted publickey for zuul from 192.168.122.30 port 34678 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:41:24 compute-0 systemd-logind[788]: New session 41 of user zuul.
Oct 01 16:41:24 compute-0 systemd[1]: Started Session 41 of User zuul.
Oct 01 16:41:24 compute-0 sshd-session[129052]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:41:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.c scrub starts
Oct 01 16:41:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.c scrub ok
Oct 01 16:41:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:25 compute-0 ceph-mon[74273]: 9.c scrub starts
Oct 01 16:41:25 compute-0 ceph-mon[74273]: 9.c scrub ok
Oct 01 16:41:25 compute-0 sudo[129205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsmwvingbfpdebxizfzhvuaopbfyajqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336884.879298-16-254235717838760/AnsiballZ_tempfile.py'
Oct 01 16:41:25 compute-0 sudo[129205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:25 compute-0 python3.9[129207]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 01 16:41:25 compute-0 sudo[129205]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:26 compute-0 ceph-mon[74273]: pgmap v330: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:26 compute-0 sudo[129357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cydhdghwvzmhfdibuakkqaswapzmjvmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336885.936905-28-41475409440451/AnsiballZ_stat.py'
Oct 01 16:41:26 compute-0 sudo[129357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:26 compute-0 python3.9[129359]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:41:26 compute-0 sudo[129357]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:27 compute-0 sudo[129511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpztudimrwqskfkbriublawikddcgjhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336886.9337204-36-245346077347893/AnsiballZ_slurp.py'
Oct 01 16:41:27 compute-0 sudo[129511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:27 compute-0 python3.9[129513]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct 01 16:41:27 compute-0 sudo[129511]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:27 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct 01 16:41:27 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct 01 16:41:28 compute-0 sudo[129663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdyuvscjsjeuboammsyquqymyjbhtrjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336887.7541606-44-113561678905010/AnsiballZ_stat.py'
Oct 01 16:41:28 compute-0 sudo[129663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:28 compute-0 python3.9[129665]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.b3msery_ follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:41:28 compute-0 sudo[129663]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:28 compute-0 ceph-mon[74273]: pgmap v331: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:28 compute-0 ceph-mon[74273]: 6.f scrub starts
Oct 01 16:41:28 compute-0 ceph-mon[74273]: 6.f scrub ok
Oct 01 16:41:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:41:28 compute-0 sudo[129788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbxejuglphwpsklnvteuwcejjiyaozxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336887.7541606-44-113561678905010/AnsiballZ_copy.py'
Oct 01 16:41:28 compute-0 sudo[129788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:29 compute-0 python3.9[129790]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.b3msery_ mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336887.7541606-44-113561678905010/.source.b3msery_ _original_basename=.lhbwiypc follow=False checksum=5d2e9b39eb910f223ec294959d8171e17740e729 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:41:29 compute-0 sudo[129788]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:29 compute-0 sudo[129940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcyzmffmtwbttohakrdgwwadwkbcsiaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336889.1775174-59-79559643379492/AnsiballZ_setup.py'
Oct 01 16:41:29 compute-0 sudo[129940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:29 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct 01 16:41:29 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct 01 16:41:30 compute-0 python3.9[129942]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:41:30 compute-0 sudo[129940]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:30 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 01 16:41:30 compute-0 ceph-mon[74273]: pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Oct 01 16:41:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Oct 01 16:41:30 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Oct 01 16:41:30 compute-0 sudo[130094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujfdewsmbwjlygvjdbrwefrusggfpern ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336890.4067001-68-150832881613313/AnsiballZ_blockinfile.py'
Oct 01 16:41:30 compute-0 sudo[130094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:30 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Oct 01 16:41:31 compute-0 python3.9[130096]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCz7gvtcbMKzpFnHk1f4agzt9I90wetCA2EaLBu1oNJgTT3PtXCey882cflGOcGiO6eA2djvYpIUL+o7dwRLRBNZ97kA04YOAxeYgxIAXDoxPAbfWV8bVry0kTPdKZonohal9Yr3OlzFdBEj6ZVjrAYD3ZOiXeisDyUeOpVoUNWE7DR9kGSu0fuebmAAVWWsrP1IR+DWBG491Cc3cMgrCzQLjDCGcjk1OyXJiyHYAlu+Zef+3kC7YM4l9GpgaFsQFTQE1JkpkqN7qwI47UUE8Z7RUJR9Oeu5Jq+Mjo3b0N3yscTa/IkuG8z9eObxEv523hvSPy1A2EyyVpJYUWJ0AA70tn2el30bWrMoX8lIUwDIuGiwWtXi7w8XpCoOxwzaRgvZ7sHXk2tAuQAHJhpaWIImdqHvhsm35BsBrfRTgZ28SlY2IidIM26CK0JdMFTDUdetjZUsT3KsCrwpJBI+znBCqyzLG3y8iIpbcetM/g+g0OD6im4a7bmbQiWmJVDta8=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP/WM+tWUlUfKM2Ij44JLzsmgyV7ZneIlfyqQnDhdJi9
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMLBvCDWMwltEj/RodBE8oenZIUSaxU7mHDpOkUqLs1NZFXgaYsbb2fSdVyrhZx1ae8i/pDWxipoAGqK53fnMAo=
                                              create=True mode=0644 path=/tmp/ansible.b3msery_ state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:41:31 compute-0 sudo[130094]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:31 compute-0 ceph-mon[74273]: 6.6 scrub starts
Oct 01 16:41:31 compute-0 ceph-mon[74273]: 6.6 scrub ok
Oct 01 16:41:31 compute-0 ceph-mon[74273]: 9.13 scrub starts
Oct 01 16:41:31 compute-0 ceph-mon[74273]: 9.13 scrub ok
Oct 01 16:41:31 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct 01 16:41:31 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct 01 16:41:31 compute-0 sudo[130246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouuhhaeduyjcvwfqfyfpgthsxlvxncro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336891.4082265-76-97148801718761/AnsiballZ_command.py'
Oct 01 16:41:31 compute-0 sudo[130246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:32 compute-0 python3.9[130248]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.b3msery_' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:41:32 compute-0 sudo[130246]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:32 compute-0 ceph-mon[74273]: 6.2 deep-scrub starts
Oct 01 16:41:32 compute-0 ceph-mon[74273]: 6.2 deep-scrub ok
Oct 01 16:41:32 compute-0 ceph-mon[74273]: pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:32 compute-0 sudo[130400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfjgkbkymhnwgtqaptmuvszighhizcrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336892.3839526-84-271897764483477/AnsiballZ_file.py'
Oct 01 16:41:32 compute-0 sudo[130400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:32 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Oct 01 16:41:33 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Oct 01 16:41:33 compute-0 python3.9[130402]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.b3msery_ state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:41:33 compute-0 sudo[130400]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:33 compute-0 sshd-session[129055]: Connection closed by 192.168.122.30 port 34678
Oct 01 16:41:33 compute-0 sshd-session[129052]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:41:33 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Oct 01 16:41:33 compute-0 systemd-logind[788]: Session 41 logged out. Waiting for processes to exit.
Oct 01 16:41:33 compute-0 systemd[1]: session-41.scope: Consumed 5.385s CPU time.
Oct 01 16:41:33 compute-0 systemd-logind[788]: Removed session 41.
Oct 01 16:41:33 compute-0 ceph-mon[74273]: 6.e scrub starts
Oct 01 16:41:33 compute-0 ceph-mon[74273]: 6.e scrub ok
Oct 01 16:41:33 compute-0 ceph-mon[74273]: 9.19 scrub starts
Oct 01 16:41:33 compute-0 ceph-mon[74273]: 9.19 scrub ok
Oct 01 16:41:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:41:34 compute-0 ceph-mon[74273]: pgmap v334: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:36 compute-0 ceph-mon[74273]: pgmap v335: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:37 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.c deep-scrub starts
Oct 01 16:41:37 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.c deep-scrub ok
Oct 01 16:41:38 compute-0 ceph-mon[74273]: pgmap v336: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:41:38 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct 01 16:41:38 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct 01 16:41:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:39 compute-0 ceph-mon[74273]: 6.c deep-scrub starts
Oct 01 16:41:39 compute-0 ceph-mon[74273]: 6.c deep-scrub ok
Oct 01 16:41:39 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.b scrub starts
Oct 01 16:41:39 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.b scrub ok
Oct 01 16:41:40 compute-0 ceph-mon[74273]: 6.4 scrub starts
Oct 01 16:41:40 compute-0 ceph-mon[74273]: 6.4 scrub ok
Oct 01 16:41:40 compute-0 ceph-mon[74273]: pgmap v337: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:40 compute-0 sshd-session[130427]: Accepted publickey for zuul from 192.168.122.30 port 59792 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:41:40 compute-0 systemd-logind[788]: New session 42 of user zuul.
Oct 01 16:41:40 compute-0 systemd[1]: Started Session 42 of User zuul.
Oct 01 16:41:40 compute-0 sshd-session[130427]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:41:40 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.d deep-scrub starts
Oct 01 16:41:40 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.d deep-scrub ok
Oct 01 16:41:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:41:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:41:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:41:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:41:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:41:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:41:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:41 compute-0 ceph-mon[74273]: 6.b scrub starts
Oct 01 16:41:41 compute-0 ceph-mon[74273]: 6.b scrub ok
Oct 01 16:41:41 compute-0 python3.9[130580]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:41:42 compute-0 ceph-mon[74273]: 6.d deep-scrub starts
Oct 01 16:41:42 compute-0 ceph-mon[74273]: 6.d deep-scrub ok
Oct 01 16:41:42 compute-0 ceph-mon[74273]: pgmap v338: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:42 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Oct 01 16:41:42 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Oct 01 16:41:42 compute-0 sudo[130734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnwhitiikexrqtaqskekmquanqxxeqxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336902.2281516-32-262382920652114/AnsiballZ_systemd.py'
Oct 01 16:41:42 compute-0 sudo[130734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:43 compute-0 python3.9[130736]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 01 16:41:43 compute-0 sudo[130734]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:43 compute-0 ceph-mon[74273]: 9.15 scrub starts
Oct 01 16:41:43 compute-0 ceph-mon[74273]: 9.15 scrub ok
Oct 01 16:41:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:41:43 compute-0 sudo[130888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiqcsyramuabdxtzvhbghmwbwzpibmbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336903.599531-40-113314482467485/AnsiballZ_systemd.py'
Oct 01 16:41:43 compute-0 sudo[130888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:44 compute-0 python3.9[130890]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 16:41:44 compute-0 sudo[130888]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:44 compute-0 ceph-mon[74273]: pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:44 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Oct 01 16:41:44 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Oct 01 16:41:45 compute-0 sudo[131041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecysimpqicowdxpeockpsgzpfzcebhtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336904.5875866-49-194860543404810/AnsiballZ_command.py'
Oct 01 16:41:45 compute-0 sudo[131041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:45 compute-0 python3.9[131043]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:41:45 compute-0 sudo[131041]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:45 compute-0 sudo[131066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:41:45 compute-0 sudo[131066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:41:45 compute-0 sudo[131066]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:45 compute-0 sudo[131095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:41:45 compute-0 sudo[131095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:41:45 compute-0 sudo[131095]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:45 compute-0 ceph-mon[74273]: 9.1f scrub starts
Oct 01 16:41:45 compute-0 ceph-mon[74273]: 9.1f scrub ok
Oct 01 16:41:45 compute-0 sudo[131159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:41:45 compute-0 sudo[131159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:41:45 compute-0 sudo[131159]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:45 compute-0 sudo[131196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:41:45 compute-0 sudo[131196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:41:46 compute-0 sudo[131311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vthjamqlttjkzsqgeltpycnecpencbra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336905.5821667-57-88583331989789/AnsiballZ_stat.py'
Oct 01 16:41:46 compute-0 sudo[131311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:46 compute-0 sudo[131196]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:46 compute-0 python3.9[131313]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:41:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:41:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:41:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:41:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:41:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:41:46 compute-0 sudo[131311]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:41:46 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 82db9156-e3b4-48fb-a0b8-b63b6a7a0aec does not exist
Oct 01 16:41:46 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 78ca883c-db8c-4da1-a612-9169f8788626 does not exist
Oct 01 16:41:46 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 1123512a-824f-4679-8fb1-88d869b1b184 does not exist
Oct 01 16:41:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:41:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:41:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:41:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:41:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:41:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:41:46 compute-0 sudo[131328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:41:46 compute-0 sudo[131328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:41:46 compute-0 sudo[131328]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:46 compute-0 sudo[131377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:41:46 compute-0 sudo[131377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:41:46 compute-0 sudo[131377]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:46 compute-0 sudo[131402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:41:46 compute-0 ceph-mon[74273]: pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:41:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:41:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:41:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:41:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:41:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:41:46 compute-0 sudo[131402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:41:46 compute-0 sudo[131402]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:46 compute-0 sudo[131450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:41:46 compute-0 sudo[131450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:41:47 compute-0 podman[131588]: 2025-10-01 16:41:47.165114637 +0000 UTC m=+0.049648513 container create dd19c1acbbef1990f34cb7519ead66551a22c8902d2d90cbb362c2e021c79529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:41:47 compute-0 systemd[1]: Started libpod-conmon-dd19c1acbbef1990f34cb7519ead66551a22c8902d2d90cbb362c2e021c79529.scope.
Oct 01 16:41:47 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:41:47 compute-0 podman[131588]: 2025-10-01 16:41:47.143511992 +0000 UTC m=+0.028045928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:41:47 compute-0 sudo[131637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaaftbysnxjgchjwajoshflejytelwhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336906.634743-66-157099504910234/AnsiballZ_file.py'
Oct 01 16:41:47 compute-0 sudo[131637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:47 compute-0 podman[131588]: 2025-10-01 16:41:47.252132188 +0000 UTC m=+0.136666124 container init dd19c1acbbef1990f34cb7519ead66551a22c8902d2d90cbb362c2e021c79529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ganguly, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 16:41:47 compute-0 podman[131588]: 2025-10-01 16:41:47.262221068 +0000 UTC m=+0.146754974 container start dd19c1acbbef1990f34cb7519ead66551a22c8902d2d90cbb362c2e021c79529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ganguly, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:41:47 compute-0 podman[131588]: 2025-10-01 16:41:47.266417365 +0000 UTC m=+0.150951321 container attach dd19c1acbbef1990f34cb7519ead66551a22c8902d2d90cbb362c2e021c79529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 16:41:47 compute-0 agitated_ganguly[131631]: 167 167
Oct 01 16:41:47 compute-0 systemd[1]: libpod-dd19c1acbbef1990f34cb7519ead66551a22c8902d2d90cbb362c2e021c79529.scope: Deactivated successfully.
Oct 01 16:41:47 compute-0 conmon[131631]: conmon dd19c1acbbef1990f34c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd19c1acbbef1990f34cb7519ead66551a22c8902d2d90cbb362c2e021c79529.scope/container/memory.events
Oct 01 16:41:47 compute-0 podman[131588]: 2025-10-01 16:41:47.272174882 +0000 UTC m=+0.156708778 container died dd19c1acbbef1990f34cb7519ead66551a22c8902d2d90cbb362c2e021c79529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ganguly, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:41:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-3059f3f1c1625a84f3293b8cddb30d9fd3462754bc9ba4bc7506fafa36c73b2e-merged.mount: Deactivated successfully.
Oct 01 16:41:47 compute-0 podman[131588]: 2025-10-01 16:41:47.322683708 +0000 UTC m=+0.207217594 container remove dd19c1acbbef1990f34cb7519ead66551a22c8902d2d90cbb362c2e021c79529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ganguly, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:41:47 compute-0 systemd[1]: libpod-conmon-dd19c1acbbef1990f34cb7519ead66551a22c8902d2d90cbb362c2e021c79529.scope: Deactivated successfully.
Oct 01 16:41:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:47 compute-0 python3.9[131639]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:41:47 compute-0 sudo[131637]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:47 compute-0 podman[131660]: 2025-10-01 16:41:47.509590282 +0000 UTC m=+0.043825413 container create 2fb1187ebd2606f190e510b7173a2ec67e663765e842494297c08102c028b903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curran, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 16:41:47 compute-0 systemd[1]: Started libpod-conmon-2fb1187ebd2606f190e510b7173a2ec67e663765e842494297c08102c028b903.scope.
Oct 01 16:41:47 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617486ba8fe72407ddc9baa830bd2f1dca492a60c420c73f64dd0a9a3c4ab425/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617486ba8fe72407ddc9baa830bd2f1dca492a60c420c73f64dd0a9a3c4ab425/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617486ba8fe72407ddc9baa830bd2f1dca492a60c420c73f64dd0a9a3c4ab425/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617486ba8fe72407ddc9baa830bd2f1dca492a60c420c73f64dd0a9a3c4ab425/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617486ba8fe72407ddc9baa830bd2f1dca492a60c420c73f64dd0a9a3c4ab425/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:41:47 compute-0 podman[131660]: 2025-10-01 16:41:47.492392514 +0000 UTC m=+0.026627665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:41:47 compute-0 podman[131660]: 2025-10-01 16:41:47.591738781 +0000 UTC m=+0.125973982 container init 2fb1187ebd2606f190e510b7173a2ec67e663765e842494297c08102c028b903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curran, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:41:47 compute-0 podman[131660]: 2025-10-01 16:41:47.611255158 +0000 UTC m=+0.145490289 container start 2fb1187ebd2606f190e510b7173a2ec67e663765e842494297c08102c028b903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curran, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 16:41:47 compute-0 podman[131660]: 2025-10-01 16:41:47.615833624 +0000 UTC m=+0.150068785 container attach 2fb1187ebd2606f190e510b7173a2ec67e663765e842494297c08102c028b903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:41:47 compute-0 sshd-session[130430]: Connection closed by 192.168.122.30 port 59792
Oct 01 16:41:47 compute-0 sshd-session[130427]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:41:47 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Oct 01 16:41:47 compute-0 systemd[1]: session-42.scope: Consumed 4.441s CPU time.
Oct 01 16:41:47 compute-0 systemd-logind[788]: Session 42 logged out. Waiting for processes to exit.
Oct 01 16:41:47 compute-0 systemd-logind[788]: Removed session 42.
Oct 01 16:41:48 compute-0 ceph-mon[74273]: pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:48 compute-0 eager_curran[131701]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:41:48 compute-0 eager_curran[131701]: --> relative data size: 1.0
Oct 01 16:41:48 compute-0 eager_curran[131701]: --> All data devices are unavailable
Oct 01 16:41:48 compute-0 systemd[1]: libpod-2fb1187ebd2606f190e510b7173a2ec67e663765e842494297c08102c028b903.scope: Deactivated successfully.
Oct 01 16:41:48 compute-0 systemd[1]: libpod-2fb1187ebd2606f190e510b7173a2ec67e663765e842494297c08102c028b903.scope: Consumed 1.090s CPU time.
Oct 01 16:41:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:41:48 compute-0 podman[131731]: 2025-10-01 16:41:48.799134074 +0000 UTC m=+0.026539540 container died 2fb1187ebd2606f190e510b7173a2ec67e663765e842494297c08102c028b903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:41:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-617486ba8fe72407ddc9baa830bd2f1dca492a60c420c73f64dd0a9a3c4ab425-merged.mount: Deactivated successfully.
Oct 01 16:41:48 compute-0 podman[131731]: 2025-10-01 16:41:48.844150627 +0000 UTC m=+0.071556073 container remove 2fb1187ebd2606f190e510b7173a2ec67e663765e842494297c08102c028b903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 16:41:48 compute-0 systemd[1]: libpod-conmon-2fb1187ebd2606f190e510b7173a2ec67e663765e842494297c08102c028b903.scope: Deactivated successfully.
Oct 01 16:41:48 compute-0 sudo[131450]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:48 compute-0 sudo[131746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:41:48 compute-0 sudo[131746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:41:48 compute-0 sudo[131746]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:49 compute-0 sudo[131771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:41:49 compute-0 sudo[131771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:41:49 compute-0 sudo[131771]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:49 compute-0 sudo[131796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:41:49 compute-0 sudo[131796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:41:49 compute-0 sudo[131796]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:49 compute-0 sudo[131821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:41:49 compute-0 sudo[131821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:41:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:49 compute-0 podman[131886]: 2025-10-01 16:41:49.629610658 +0000 UTC m=+0.043289604 container create 7a249a45afdeb6e38d9f84ab3dc447a81c4fe84242eae74e547eee8f94d7d302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 16:41:49 compute-0 systemd[1]: Started libpod-conmon-7a249a45afdeb6e38d9f84ab3dc447a81c4fe84242eae74e547eee8f94d7d302.scope.
Oct 01 16:41:49 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:41:49 compute-0 podman[131886]: 2025-10-01 16:41:49.612372129 +0000 UTC m=+0.026051085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:41:49 compute-0 podman[131886]: 2025-10-01 16:41:49.725773461 +0000 UTC m=+0.139452397 container init 7a249a45afdeb6e38d9f84ab3dc447a81c4fe84242eae74e547eee8f94d7d302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 16:41:49 compute-0 podman[131886]: 2025-10-01 16:41:49.731866785 +0000 UTC m=+0.145545721 container start 7a249a45afdeb6e38d9f84ab3dc447a81c4fe84242eae74e547eee8f94d7d302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Oct 01 16:41:49 compute-0 podman[131886]: 2025-10-01 16:41:49.735210208 +0000 UTC m=+0.148889164 container attach 7a249a45afdeb6e38d9f84ab3dc447a81c4fe84242eae74e547eee8f94d7d302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elgamal, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 16:41:49 compute-0 hardcore_elgamal[131903]: 167 167
Oct 01 16:41:49 compute-0 systemd[1]: libpod-7a249a45afdeb6e38d9f84ab3dc447a81c4fe84242eae74e547eee8f94d7d302.scope: Deactivated successfully.
Oct 01 16:41:49 compute-0 podman[131886]: 2025-10-01 16:41:49.738142219 +0000 UTC m=+0.151821155 container died 7a249a45afdeb6e38d9f84ab3dc447a81c4fe84242eae74e547eee8f94d7d302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 16:41:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-462127fe305364524db2be6cc3a78f30f4d0c4b129fd90563e088196e3d62d7c-merged.mount: Deactivated successfully.
Oct 01 16:41:49 compute-0 podman[131886]: 2025-10-01 16:41:49.781339748 +0000 UTC m=+0.195018724 container remove 7a249a45afdeb6e38d9f84ab3dc447a81c4fe84242eae74e547eee8f94d7d302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:41:49 compute-0 systemd[1]: libpod-conmon-7a249a45afdeb6e38d9f84ab3dc447a81c4fe84242eae74e547eee8f94d7d302.scope: Deactivated successfully.
Oct 01 16:41:49 compute-0 podman[131928]: 2025-10-01 16:41:49.95986142 +0000 UTC m=+0.055004439 container create f834a028dc45a776fdba7f127051f71341c1ff6960fe4050431f434c13792fc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_rhodes, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:41:50 compute-0 systemd[1]: Started libpod-conmon-f834a028dc45a776fdba7f127051f71341c1ff6960fe4050431f434c13792fc2.scope.
Oct 01 16:41:50 compute-0 podman[131928]: 2025-10-01 16:41:49.932019453 +0000 UTC m=+0.027162482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:41:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c4edc5e6496a633e2a5db3e416aea39c92207a1ee59b6f622cc8fbf854979e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c4edc5e6496a633e2a5db3e416aea39c92207a1ee59b6f622cc8fbf854979e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c4edc5e6496a633e2a5db3e416aea39c92207a1ee59b6f622cc8fbf854979e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c4edc5e6496a633e2a5db3e416aea39c92207a1ee59b6f622cc8fbf854979e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:41:50 compute-0 podman[131928]: 2025-10-01 16:41:50.069587222 +0000 UTC m=+0.164730321 container init f834a028dc45a776fdba7f127051f71341c1ff6960fe4050431f434c13792fc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:41:50 compute-0 podman[131928]: 2025-10-01 16:41:50.08058866 +0000 UTC m=+0.175731679 container start f834a028dc45a776fdba7f127051f71341c1ff6960fe4050431f434c13792fc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct 01 16:41:50 compute-0 podman[131928]: 2025-10-01 16:41:50.084168904 +0000 UTC m=+0.179311953 container attach f834a028dc45a776fdba7f127051f71341c1ff6960fe4050431f434c13792fc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 16:41:50 compute-0 ceph-mon[74273]: pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:50 compute-0 great_rhodes[131944]: {
Oct 01 16:41:50 compute-0 great_rhodes[131944]:     "0": [
Oct 01 16:41:50 compute-0 great_rhodes[131944]:         {
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "devices": [
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "/dev/loop3"
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             ],
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "lv_name": "ceph_lv0",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "lv_size": "21470642176",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "name": "ceph_lv0",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "tags": {
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.cluster_name": "ceph",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.crush_device_class": "",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.encrypted": "0",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.osd_id": "0",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.type": "block",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.vdo": "0"
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             },
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "type": "block",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "vg_name": "ceph_vg0"
Oct 01 16:41:50 compute-0 great_rhodes[131944]:         }
Oct 01 16:41:50 compute-0 great_rhodes[131944]:     ],
Oct 01 16:41:50 compute-0 great_rhodes[131944]:     "1": [
Oct 01 16:41:50 compute-0 great_rhodes[131944]:         {
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "devices": [
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "/dev/loop4"
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             ],
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "lv_name": "ceph_lv1",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "lv_size": "21470642176",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "name": "ceph_lv1",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "tags": {
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.cluster_name": "ceph",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.crush_device_class": "",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.encrypted": "0",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.osd_id": "1",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.type": "block",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.vdo": "0"
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             },
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "type": "block",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "vg_name": "ceph_vg1"
Oct 01 16:41:50 compute-0 great_rhodes[131944]:         }
Oct 01 16:41:50 compute-0 great_rhodes[131944]:     ],
Oct 01 16:41:50 compute-0 great_rhodes[131944]:     "2": [
Oct 01 16:41:50 compute-0 great_rhodes[131944]:         {
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "devices": [
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "/dev/loop5"
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             ],
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "lv_name": "ceph_lv2",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "lv_size": "21470642176",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "name": "ceph_lv2",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "tags": {
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.cluster_name": "ceph",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.crush_device_class": "",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.encrypted": "0",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.osd_id": "2",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.type": "block",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:                 "ceph.vdo": "0"
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             },
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "type": "block",
Oct 01 16:41:50 compute-0 great_rhodes[131944]:             "vg_name": "ceph_vg2"
Oct 01 16:41:50 compute-0 great_rhodes[131944]:         }
Oct 01 16:41:50 compute-0 great_rhodes[131944]:     ]
Oct 01 16:41:50 compute-0 great_rhodes[131944]: }
Oct 01 16:41:50 compute-0 systemd[1]: libpod-f834a028dc45a776fdba7f127051f71341c1ff6960fe4050431f434c13792fc2.scope: Deactivated successfully.
Oct 01 16:41:50 compute-0 podman[131928]: 2025-10-01 16:41:50.859774257 +0000 UTC m=+0.954917306 container died f834a028dc45a776fdba7f127051f71341c1ff6960fe4050431f434c13792fc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:41:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-06c4edc5e6496a633e2a5db3e416aea39c92207a1ee59b6f622cc8fbf854979e-merged.mount: Deactivated successfully.
Oct 01 16:41:50 compute-0 podman[131928]: 2025-10-01 16:41:50.922791059 +0000 UTC m=+1.017934078 container remove f834a028dc45a776fdba7f127051f71341c1ff6960fe4050431f434c13792fc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_rhodes, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:41:50 compute-0 systemd[1]: libpod-conmon-f834a028dc45a776fdba7f127051f71341c1ff6960fe4050431f434c13792fc2.scope: Deactivated successfully.
Oct 01 16:41:50 compute-0 sudo[131821]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:51 compute-0 sudo[131967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:41:51 compute-0 sudo[131967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:41:51 compute-0 sudo[131967]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:51 compute-0 sudo[131992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:41:51 compute-0 sudo[131992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:41:51 compute-0 sudo[131992]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:51 compute-0 sudo[132017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:41:51 compute-0 sudo[132017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:41:51 compute-0 sudo[132017]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:51 compute-0 sudo[132042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:41:51 compute-0 sudo[132042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:41:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:51 compute-0 podman[132107]: 2025-10-01 16:41:51.713628396 +0000 UTC m=+0.044287456 container create 84905bc6de60d27750b5a955b2737ab4c2eb59c42072aed090bef0e8f0032197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_villani, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 16:41:51 compute-0 systemd[1]: Started libpod-conmon-84905bc6de60d27750b5a955b2737ab4c2eb59c42072aed090bef0e8f0032197.scope.
Oct 01 16:41:51 compute-0 podman[132107]: 2025-10-01 16:41:51.693371501 +0000 UTC m=+0.024030551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:41:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:41:51 compute-0 podman[132107]: 2025-10-01 16:41:51.81351609 +0000 UTC m=+0.144175190 container init 84905bc6de60d27750b5a955b2737ab4c2eb59c42072aed090bef0e8f0032197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_villani, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 16:41:51 compute-0 podman[132107]: 2025-10-01 16:41:51.821937225 +0000 UTC m=+0.152596255 container start 84905bc6de60d27750b5a955b2737ab4c2eb59c42072aed090bef0e8f0032197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 16:41:51 compute-0 podman[132107]: 2025-10-01 16:41:51.826809946 +0000 UTC m=+0.157469056 container attach 84905bc6de60d27750b5a955b2737ab4c2eb59c42072aed090bef0e8f0032197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_villani, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct 01 16:41:51 compute-0 clever_villani[132124]: 167 167
Oct 01 16:41:51 compute-0 systemd[1]: libpod-84905bc6de60d27750b5a955b2737ab4c2eb59c42072aed090bef0e8f0032197.scope: Deactivated successfully.
Oct 01 16:41:51 compute-0 podman[132107]: 2025-10-01 16:41:51.830086026 +0000 UTC m=+0.160745046 container died 84905bc6de60d27750b5a955b2737ab4c2eb59c42072aed090bef0e8f0032197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_villani, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:41:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-747cf05d5b94e18c26c6d6494a5aa7c5eec727f99ece297da91c7f1c544dd595-merged.mount: Deactivated successfully.
Oct 01 16:41:51 compute-0 podman[132107]: 2025-10-01 16:41:51.879870904 +0000 UTC m=+0.210529934 container remove 84905bc6de60d27750b5a955b2737ab4c2eb59c42072aed090bef0e8f0032197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 16:41:51 compute-0 systemd[1]: libpod-conmon-84905bc6de60d27750b5a955b2737ab4c2eb59c42072aed090bef0e8f0032197.scope: Deactivated successfully.
Oct 01 16:41:52 compute-0 podman[132149]: 2025-10-01 16:41:52.050003024 +0000 UTC m=+0.052248978 container create 990e8a28699b29368011376056228a23e85d4e81d1d0979ff6583a9b299f9fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jackson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 16:41:52 compute-0 systemd[1]: Started libpod-conmon-990e8a28699b29368011376056228a23e85d4e81d1d0979ff6583a9b299f9fcb.scope.
Oct 01 16:41:52 compute-0 podman[132149]: 2025-10-01 16:41:52.027342214 +0000 UTC m=+0.029588148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:41:52 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1491792aada6bc1cbddc46eeb208ad3a13eec060591a430338e49bd8aca8741e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1491792aada6bc1cbddc46eeb208ad3a13eec060591a430338e49bd8aca8741e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1491792aada6bc1cbddc46eeb208ad3a13eec060591a430338e49bd8aca8741e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1491792aada6bc1cbddc46eeb208ad3a13eec060591a430338e49bd8aca8741e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:41:52 compute-0 podman[132149]: 2025-10-01 16:41:52.146162936 +0000 UTC m=+0.148408870 container init 990e8a28699b29368011376056228a23e85d4e81d1d0979ff6583a9b299f9fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jackson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:41:52 compute-0 podman[132149]: 2025-10-01 16:41:52.154926938 +0000 UTC m=+0.157172862 container start 990e8a28699b29368011376056228a23e85d4e81d1d0979ff6583a9b299f9fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 16:41:52 compute-0 podman[132149]: 2025-10-01 16:41:52.159489233 +0000 UTC m=+0.161735167 container attach 990e8a28699b29368011376056228a23e85d4e81d1d0979ff6583a9b299f9fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jackson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 16:41:52 compute-0 ceph-mon[74273]: pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:53 compute-0 tender_jackson[132165]: {
Oct 01 16:41:53 compute-0 tender_jackson[132165]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:41:53 compute-0 tender_jackson[132165]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:41:53 compute-0 tender_jackson[132165]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:41:53 compute-0 tender_jackson[132165]:         "osd_id": 2,
Oct 01 16:41:53 compute-0 tender_jackson[132165]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:41:53 compute-0 tender_jackson[132165]:         "type": "bluestore"
Oct 01 16:41:53 compute-0 tender_jackson[132165]:     },
Oct 01 16:41:53 compute-0 tender_jackson[132165]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:41:53 compute-0 tender_jackson[132165]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:41:53 compute-0 tender_jackson[132165]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:41:53 compute-0 tender_jackson[132165]:         "osd_id": 0,
Oct 01 16:41:53 compute-0 tender_jackson[132165]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:41:53 compute-0 tender_jackson[132165]:         "type": "bluestore"
Oct 01 16:41:53 compute-0 tender_jackson[132165]:     },
Oct 01 16:41:53 compute-0 tender_jackson[132165]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:41:53 compute-0 tender_jackson[132165]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:41:53 compute-0 tender_jackson[132165]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:41:53 compute-0 tender_jackson[132165]:         "osd_id": 1,
Oct 01 16:41:53 compute-0 tender_jackson[132165]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:41:53 compute-0 tender_jackson[132165]:         "type": "bluestore"
Oct 01 16:41:53 compute-0 tender_jackson[132165]:     }
Oct 01 16:41:53 compute-0 tender_jackson[132165]: }
Oct 01 16:41:53 compute-0 systemd[1]: libpod-990e8a28699b29368011376056228a23e85d4e81d1d0979ff6583a9b299f9fcb.scope: Deactivated successfully.
Oct 01 16:41:53 compute-0 podman[132149]: 2025-10-01 16:41:53.200399956 +0000 UTC m=+1.202645950 container died 990e8a28699b29368011376056228a23e85d4e81d1d0979ff6583a9b299f9fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 16:41:53 compute-0 systemd[1]: libpod-990e8a28699b29368011376056228a23e85d4e81d1d0979ff6583a9b299f9fcb.scope: Consumed 1.054s CPU time.
Oct 01 16:41:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-1491792aada6bc1cbddc46eeb208ad3a13eec060591a430338e49bd8aca8741e-merged.mount: Deactivated successfully.
Oct 01 16:41:53 compute-0 podman[132149]: 2025-10-01 16:41:53.267106068 +0000 UTC m=+1.269351982 container remove 990e8a28699b29368011376056228a23e85d4e81d1d0979ff6583a9b299f9fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jackson, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 16:41:53 compute-0 systemd[1]: libpod-conmon-990e8a28699b29368011376056228a23e85d4e81d1d0979ff6583a9b299f9fcb.scope: Deactivated successfully.
Oct 01 16:41:53 compute-0 sudo[132042]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:41:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:41:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:41:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:41:53 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev de17b607-2cbe-4aeb-b6cd-b26cc97953ac does not exist
Oct 01 16:41:53 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 23617b8b-3cf3-4478-be58-17d70cd9e2de does not exist
Oct 01 16:41:53 compute-0 sudo[132211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:41:53 compute-0 sudo[132211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:41:53 compute-0 sudo[132211]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:53 compute-0 sudo[132236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:41:53 compute-0 sudo[132236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:41:53 compute-0 sudo[132236]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:53 compute-0 sshd-session[132261]: Accepted publickey for zuul from 192.168.122.30 port 44372 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:41:53 compute-0 systemd-logind[788]: New session 43 of user zuul.
Oct 01 16:41:53 compute-0 systemd[1]: Started Session 43 of User zuul.
Oct 01 16:41:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:41:53 compute-0 sshd-session[132261]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:41:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:41:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:41:54 compute-0 ceph-mon[74273]: pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:54 compute-0 sshd-session[70530]: Received disconnect from 38.129.56.198 port 51506:11: disconnected by user
Oct 01 16:41:54 compute-0 sshd-session[70530]: Disconnected from user zuul 38.129.56.198 port 51506
Oct 01 16:41:54 compute-0 sshd-session[70527]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:41:54 compute-0 systemd-logind[788]: Session 18 logged out. Waiting for processes to exit.
Oct 01 16:41:54 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Oct 01 16:41:54 compute-0 systemd[1]: session-18.scope: Consumed 1min 23.100s CPU time.
Oct 01 16:41:54 compute-0 systemd-logind[788]: Removed session 18.
Oct 01 16:41:54 compute-0 python3.9[132414]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:41:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:55 compute-0 sudo[132568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nscekduefgvkwivoxfxrybsyznjuqdzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336915.4256623-34-68486478710283/AnsiballZ_setup.py'
Oct 01 16:41:55 compute-0 sudo[132568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:56 compute-0 python3.9[132570]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 16:41:56 compute-0 sudo[132568]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:56 compute-0 ceph-mon[74273]: pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:56 compute-0 sudo[132652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hylqzgvjvqyexrlsuuzczxekbtrejwgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336915.4256623-34-68486478710283/AnsiballZ_dnf.py'
Oct 01 16:41:56 compute-0 sudo[132652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:41:57 compute-0 python3.9[132654]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 01 16:41:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:58 compute-0 sudo[132652]: pam_unix(sudo:session): session closed for user root
Oct 01 16:41:58 compute-0 ceph-mon[74273]: pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:41:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:41:59 compute-0 python3.9[132805]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:41:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:00 compute-0 ceph-mon[74273]: pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:00 compute-0 python3.9[132956]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 01 16:42:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:01 compute-0 python3.9[133106]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:42:02 compute-0 python3.9[133256]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:42:02 compute-0 ceph-mon[74273]: pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:02 compute-0 sshd-session[132264]: Connection closed by 192.168.122.30 port 44372
Oct 01 16:42:02 compute-0 sshd-session[132261]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:42:02 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Oct 01 16:42:02 compute-0 systemd[1]: session-43.scope: Consumed 6.508s CPU time.
Oct 01 16:42:02 compute-0 systemd-logind[788]: Session 43 logged out. Waiting for processes to exit.
Oct 01 16:42:02 compute-0 systemd-logind[788]: Removed session 43.
Oct 01 16:42:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:42:04 compute-0 ceph-mon[74273]: pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:06 compute-0 ceph-mon[74273]: pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:08 compute-0 ceph-mon[74273]: pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:42:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:09 compute-0 sshd-session[133281]: Accepted publickey for zuul from 192.168.122.30 port 47868 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:42:09 compute-0 systemd-logind[788]: New session 44 of user zuul.
Oct 01 16:42:09 compute-0 systemd[1]: Started Session 44 of User zuul.
Oct 01 16:42:09 compute-0 sshd-session[133281]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:42:10 compute-0 ceph-mon[74273]: pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:10 compute-0 python3.9[133434]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:42:11
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'images', 'volumes', 'backups']
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:42:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:12 compute-0 sudo[133588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrfhbassfgnbhsdclfyrhrsrsfqglzkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336931.7572749-50-184860506865178/AnsiballZ_file.py'
Oct 01 16:42:12 compute-0 sudo[133588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:12 compute-0 python3.9[133590]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:42:12 compute-0 sudo[133588]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:12 compute-0 ceph-mon[74273]: pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:12 compute-0 sudo[133740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxmyklonrrsrkbxcjavuzalpdnldeeci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336932.6409073-50-67386193289362/AnsiballZ_file.py'
Oct 01 16:42:12 compute-0 sudo[133740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:13 compute-0 python3.9[133742]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:42:13 compute-0 sudo[133740]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:42:14 compute-0 sudo[133893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axeeidhavwnabjyvfegkakqsaxsoewqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336933.4124594-65-112285643565887/AnsiballZ_stat.py'
Oct 01 16:42:14 compute-0 sudo[133893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:14 compute-0 python3.9[133895]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:42:14 compute-0 sudo[133893]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:14 compute-0 ceph-mon[74273]: pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:14 compute-0 sudo[134016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdwumivdjysmusgsmusuvqwfcnfswsjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336933.4124594-65-112285643565887/AnsiballZ_copy.py'
Oct 01 16:42:14 compute-0 sudo[134016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:15 compute-0 python3.9[134018]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336933.4124594-65-112285643565887/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=11b88ea92146b50fd52c7fcfa02f816bfedcbd4d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:42:15 compute-0 sudo[134016]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:15 compute-0 sudo[134168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvkhezoqonfteccwwfvdzhenptbwbbbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336935.218115-65-271564712099925/AnsiballZ_stat.py'
Oct 01 16:42:15 compute-0 sudo[134168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:15 compute-0 python3.9[134170]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:42:15 compute-0 sudo[134168]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:16 compute-0 sudo[134292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvsdfngqdtfpcjmdompivwtqfabuymne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336935.218115-65-271564712099925/AnsiballZ_copy.py'
Oct 01 16:42:16 compute-0 sudo[134292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:16 compute-0 python3.9[134294]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336935.218115-65-271564712099925/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a5ad90ac5da92cf05ba699490cfd0bda03db0ca8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:42:16 compute-0 sudo[134292]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:16 compute-0 ceph-mon[74273]: pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:16 compute-0 sudo[134444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcgvlyityuvubxnrkarixnculhxjxvmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336936.4902554-65-90090790130444/AnsiballZ_stat.py'
Oct 01 16:42:16 compute-0 sudo[134444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:17 compute-0 python3.9[134446]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:42:17 compute-0 sudo[134444]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:17 compute-0 sshd-session[133743]: Invalid user admin123 from 80.94.95.115 port 26668
Oct 01 16:42:17 compute-0 sudo[134567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuqfxqrszrahbhaclsfsrwmzefvtvwtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336936.4902554-65-90090790130444/AnsiballZ_copy.py'
Oct 01 16:42:17 compute-0 sudo[134567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:17 compute-0 sshd-session[133743]: Connection closed by invalid user admin123 80.94.95.115 port 26668 [preauth]
Oct 01 16:42:17 compute-0 python3.9[134569]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336936.4902554-65-90090790130444/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=b32c1c08197a9b42408a5926fe7995c8d476c6df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:42:17 compute-0 rsyslogd[1001]: imjournal from <np0005464933:sshd-session>: begin to drop messages due to rate-limiting
Oct 01 16:42:17 compute-0 sudo[134567]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:18 compute-0 sudo[134719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyqmclslbfqqolmddgallweecezrdjdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336937.891336-109-81753883725607/AnsiballZ_file.py'
Oct 01 16:42:18 compute-0 sudo[134719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:18 compute-0 python3.9[134721]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:42:18 compute-0 sudo[134719]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:18 compute-0 ceph-mon[74273]: pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:42:18 compute-0 sudo[134871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odczvqoddfxhzhfcrpogjuctellptiex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336938.6272461-109-151205857333313/AnsiballZ_file.py'
Oct 01 16:42:18 compute-0 sudo[134871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:19 compute-0 python3.9[134873]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:42:19 compute-0 sudo[134871]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:19 compute-0 sudo[135023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxhiivyrkpzneyztsqlemrxyjjoufjab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336939.3674164-124-172167761927530/AnsiballZ_stat.py'
Oct 01 16:42:19 compute-0 sudo[135023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:19 compute-0 python3.9[135025]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:42:19 compute-0 sudo[135023]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:20 compute-0 sudo[135146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhvckmvyxzqvkdokpfgabuocqromxxlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336939.3674164-124-172167761927530/AnsiballZ_copy.py'
Oct 01 16:42:20 compute-0 sudo[135146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:20 compute-0 python3.9[135148]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336939.3674164-124-172167761927530/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=fe1f0627d0d164669dfa60d64258004fef4f1fbc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:42:20 compute-0 sudo[135146]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:20 compute-0 ceph-mon[74273]: pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:42:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:42:21 compute-0 sudo[135298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyyecpnzeildaynxssmvfqyzbsqpedkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336940.7378776-124-185874188476793/AnsiballZ_stat.py'
Oct 01 16:42:21 compute-0 sudo[135298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:21 compute-0 python3.9[135300]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:42:21 compute-0 sudo[135298]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:21 compute-0 sudo[135421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgwsatofsfxtgggvcuigsxfneimghiec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336940.7378776-124-185874188476793/AnsiballZ_copy.py'
Oct 01 16:42:21 compute-0 sudo[135421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:21 compute-0 python3.9[135423]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336940.7378776-124-185874188476793/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=11fc616797532d3c0ceae8b8d2804f995d7f090a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:42:21 compute-0 sudo[135421]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:22 compute-0 sudo[135573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbtzzyqacmhxkqpehzfgtvxgmuwkmbue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336942.1057832-124-34866570831925/AnsiballZ_stat.py'
Oct 01 16:42:22 compute-0 sudo[135573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:22 compute-0 ceph-mon[74273]: pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:22 compute-0 python3.9[135575]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:42:22 compute-0 sudo[135573]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:23 compute-0 sudo[135696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chykeehgviukpazymeejidefqowzltzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336942.1057832-124-34866570831925/AnsiballZ_copy.py'
Oct 01 16:42:23 compute-0 sudo[135696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:23 compute-0 python3.9[135698]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336942.1057832-124-34866570831925/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=c1370eb8d46c1505af3bd4ddec0e476cc540bef4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:42:23 compute-0 sudo[135696]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:42:23 compute-0 sudo[135848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyuhroavsqzcrfufrgwkhdduonvnfavc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336943.5751338-168-173753528299840/AnsiballZ_file.py'
Oct 01 16:42:23 compute-0 sudo[135848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:24 compute-0 python3.9[135850]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:42:24 compute-0 sudo[135848]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:24 compute-0 ceph-mon[74273]: pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:24 compute-0 sudo[136000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhjxyormhaxnecdqszstsmvpnmcjylfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336944.4006188-168-90578658493601/AnsiballZ_file.py'
Oct 01 16:42:24 compute-0 sudo[136000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:24 compute-0 python3.9[136002]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:42:24 compute-0 sudo[136000]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:25 compute-0 sudo[136152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saxylkflxxrevbybsglnsaodaejsyqka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336945.136782-183-255913539314649/AnsiballZ_stat.py'
Oct 01 16:42:25 compute-0 sudo[136152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:25 compute-0 python3.9[136154]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:42:25 compute-0 sudo[136152]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:26 compute-0 sudo[136275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzvgvzcxrynjhiyyemknwabhbmlzbbwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336945.136782-183-255913539314649/AnsiballZ_copy.py'
Oct 01 16:42:26 compute-0 sudo[136275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:26 compute-0 python3.9[136277]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336945.136782-183-255913539314649/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=56d7970581b30d8eb94527defb2a51e8938af163 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:42:26 compute-0 sudo[136275]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:26 compute-0 ceph-mon[74273]: pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:26 compute-0 sudo[136427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owgxmdhrqnxkvceaiiihpjkkuqzxhzql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336946.4405603-183-208821146263234/AnsiballZ_stat.py'
Oct 01 16:42:26 compute-0 sudo[136427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:27 compute-0 python3.9[136429]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:42:27 compute-0 sudo[136427]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:27 compute-0 sudo[136550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxthtvizmcbwjimhpmuxhdwljfmsqyjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336946.4405603-183-208821146263234/AnsiballZ_copy.py'
Oct 01 16:42:27 compute-0 sudo[136550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:27 compute-0 python3.9[136552]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336946.4405603-183-208821146263234/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=11fc616797532d3c0ceae8b8d2804f995d7f090a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:42:27 compute-0 sudo[136550]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:28 compute-0 sudo[136702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzzazmzjjdtdgiipwkkkipcxuxuiguhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336947.7786016-183-271101820375953/AnsiballZ_stat.py'
Oct 01 16:42:28 compute-0 sudo[136702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:28 compute-0 python3.9[136704]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:42:28 compute-0 sudo[136702]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:28 compute-0 ceph-mon[74273]: pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:28 compute-0 sudo[136825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asevzksqsnkckqodvsqliykwbvrhbmma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336947.7786016-183-271101820375953/AnsiballZ_copy.py'
Oct 01 16:42:28 compute-0 sudo[136825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:42:28 compute-0 python3.9[136827]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336947.7786016-183-271101820375953/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=1d02bb1f15b0e49099b4ef85105ef58f2f77634b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:42:28 compute-0 sudo[136825]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:30 compute-0 sudo[136977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwmccnvtinjrgqnrdzknoxonlvbsqogf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336949.6795046-243-2615253518127/AnsiballZ_file.py'
Oct 01 16:42:30 compute-0 sudo[136977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:30 compute-0 python3.9[136979]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:42:30 compute-0 sudo[136977]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:30 compute-0 ceph-mon[74273]: pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:30 compute-0 sudo[137129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qevtmvirvzwfvkifyozoifbaxbapfxed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336950.4805925-251-268535074294175/AnsiballZ_stat.py'
Oct 01 16:42:30 compute-0 sudo[137129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:31 compute-0 python3.9[137131]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:42:31 compute-0 sudo[137129]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:31 compute-0 sudo[137252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsmczqtvtgfalfpglvizpgwpnycwhzol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336950.4805925-251-268535074294175/AnsiballZ_copy.py'
Oct 01 16:42:31 compute-0 sudo[137252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:31 compute-0 python3.9[137254]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336950.4805925-251-268535074294175/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bf4be44dc9b0cb27bebca4408e722e3ce3fb0177 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:42:31 compute-0 sudo[137252]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:32 compute-0 sudo[137404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baxfbaszdbwrhunvegudlcpblypgexyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336952.0246925-267-257273887559469/AnsiballZ_file.py'
Oct 01 16:42:32 compute-0 sudo[137404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:32 compute-0 python3.9[137406]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:42:32 compute-0 sudo[137404]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:32 compute-0 ceph-mon[74273]: pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:33 compute-0 sudo[137556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzscdnehthttfzpcwsmouaxzzdhuyrad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336952.800672-275-275772909843792/AnsiballZ_stat.py'
Oct 01 16:42:33 compute-0 sudo[137556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:33 compute-0 python3.9[137558]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:42:33 compute-0 sudo[137556]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:42:33 compute-0 sudo[137679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-comcailmyfyseetvrwxdkbimzlpyvxzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336952.800672-275-275772909843792/AnsiballZ_copy.py'
Oct 01 16:42:33 compute-0 sudo[137679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:34 compute-0 python3.9[137681]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336952.800672-275-275772909843792/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bf4be44dc9b0cb27bebca4408e722e3ce3fb0177 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:42:34 compute-0 sudo[137679]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:34 compute-0 ceph-mon[74273]: pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:34 compute-0 sudo[137831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytegxqfauczxbsnhdhtmvkrdjcrmivsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336954.3941789-291-221507096948559/AnsiballZ_file.py'
Oct 01 16:42:34 compute-0 sudo[137831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:35 compute-0 python3.9[137833]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:42:35 compute-0 sudo[137831]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:35 compute-0 sudo[137983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bomaenycbtqxutompltwmdujffdkqxuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336955.2272813-299-277797126190648/AnsiballZ_stat.py'
Oct 01 16:42:35 compute-0 sudo[137983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:35 compute-0 python3.9[137985]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:42:35 compute-0 sudo[137983]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:36 compute-0 sudo[138106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blpddcsyaclyxeriemyeotzazcjiioiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336955.2272813-299-277797126190648/AnsiballZ_copy.py'
Oct 01 16:42:36 compute-0 sudo[138106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:36 compute-0 python3.9[138108]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336955.2272813-299-277797126190648/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bf4be44dc9b0cb27bebca4408e722e3ce3fb0177 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:42:36 compute-0 sudo[138106]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:36 compute-0 ceph-mon[74273]: pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:37 compute-0 sudo[138258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwmqzxxicovfqtokwayqrgbfghvpfkfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336956.7638392-315-83955641081237/AnsiballZ_file.py'
Oct 01 16:42:37 compute-0 sudo[138258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:37 compute-0 python3.9[138260]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:42:37 compute-0 sudo[138258]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:38 compute-0 sudo[138410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iniwgnhzddkcvnozozotlzupeobernqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336957.6286545-323-234370566708281/AnsiballZ_stat.py'
Oct 01 16:42:38 compute-0 sudo[138410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:38 compute-0 python3.9[138412]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:42:38 compute-0 sudo[138410]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:38 compute-0 ceph-mon[74273]: pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:38 compute-0 sudo[138533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbwbouwvuoyycdetuqrrexgzacgcghce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336957.6286545-323-234370566708281/AnsiballZ_copy.py'
Oct 01 16:42:38 compute-0 sudo[138533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:42:38 compute-0 python3.9[138535]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336957.6286545-323-234370566708281/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bf4be44dc9b0cb27bebca4408e722e3ce3fb0177 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:42:38 compute-0 sudo[138533]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:39 compute-0 sudo[138685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azbidquqpybxilmtdujuqgmhbdxbumjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336959.1831586-339-31876354550600/AnsiballZ_file.py'
Oct 01 16:42:39 compute-0 sudo[138685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:39 compute-0 python3.9[138687]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:42:39 compute-0 sudo[138685]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:40 compute-0 sudo[138837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prcotrdfwapyqkpelxgptaopkaswupug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336959.9776046-347-223321166156375/AnsiballZ_stat.py'
Oct 01 16:42:40 compute-0 sudo[138837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:40 compute-0 python3.9[138839]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:42:40 compute-0 sudo[138837]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:40 compute-0 ceph-mon[74273]: pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:41 compute-0 sudo[138960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drvkzkobaffluchqnctvddasquavxzcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336959.9776046-347-223321166156375/AnsiballZ_copy.py'
Oct 01 16:42:41 compute-0 sudo[138960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:41 compute-0 python3.9[138962]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336959.9776046-347-223321166156375/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bf4be44dc9b0cb27bebca4408e722e3ce3fb0177 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:42:41 compute-0 sudo[138960]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:42:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:42:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:42:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:42:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:42:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:42:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:41 compute-0 sudo[139112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eccxdhgfjfeagxlcgkwmnjawndhlhvqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336961.5122051-363-62897630478252/AnsiballZ_file.py'
Oct 01 16:42:41 compute-0 sudo[139112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:42 compute-0 python3.9[139114]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:42:42 compute-0 sudo[139112]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:42 compute-0 sudo[139264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roryfpdfpitwdarjxtgebvglujwnarye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336962.3252912-371-206884186652094/AnsiballZ_stat.py'
Oct 01 16:42:42 compute-0 sudo[139264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:42 compute-0 ceph-mon[74273]: pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:42 compute-0 python3.9[139266]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:42:42 compute-0 sudo[139264]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:43 compute-0 sudo[139387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhsrtzixurkzqmbpocbrqruuryyibetp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336962.3252912-371-206884186652094/AnsiballZ_copy.py'
Oct 01 16:42:43 compute-0 sudo[139387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:43 compute-0 python3.9[139389]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336962.3252912-371-206884186652094/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bf4be44dc9b0cb27bebca4408e722e3ce3fb0177 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:42:43 compute-0 sudo[139387]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:42:44 compute-0 sshd-session[133284]: Connection closed by 192.168.122.30 port 47868
Oct 01 16:42:44 compute-0 sshd-session[133281]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:42:44 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Oct 01 16:42:44 compute-0 systemd[1]: session-44.scope: Consumed 26.862s CPU time.
Oct 01 16:42:44 compute-0 systemd-logind[788]: Session 44 logged out. Waiting for processes to exit.
Oct 01 16:42:44 compute-0 systemd-logind[788]: Removed session 44.
Oct 01 16:42:44 compute-0 ceph-mon[74273]: pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:46 compute-0 ceph-mon[74273]: pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:48 compute-0 ceph-mon[74273]: pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:42:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:50 compute-0 sshd-session[139414]: Accepted publickey for zuul from 192.168.122.30 port 36956 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:42:50 compute-0 systemd-logind[788]: New session 45 of user zuul.
Oct 01 16:42:50 compute-0 systemd[1]: Started Session 45 of User zuul.
Oct 01 16:42:50 compute-0 sshd-session[139414]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:42:50 compute-0 ceph-mon[74273]: pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:51 compute-0 sudo[139567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elmacekbgnxmuxyyvcohuoeybzuauhdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336970.4460075-22-214278410963174/AnsiballZ_file.py'
Oct 01 16:42:51 compute-0 sudo[139567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:51 compute-0 python3.9[139569]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:42:51 compute-0 sudo[139567]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:52 compute-0 sudo[139719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvijhiucgehdlmhmbxgcbfeujspgdgay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336971.460435-34-214694114157286/AnsiballZ_stat.py'
Oct 01 16:42:52 compute-0 sudo[139719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:52 compute-0 python3.9[139721]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:42:52 compute-0 sudo[139719]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:52 compute-0 ceph-mon[74273]: pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:52 compute-0 sudo[139842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pntvcaqhqmnkgwkbjybomlmdnpjbjixn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336971.460435-34-214694114157286/AnsiballZ_copy.py'
Oct 01 16:42:52 compute-0 sudo[139842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:53 compute-0 python3.9[139844]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336971.460435-34-214694114157286/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=636bce12226d86adf51a8a262c22c91f203e7646 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:42:53 compute-0 sudo[139842]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:53 compute-0 sudo[140001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzjcvrmesojpdhdszlcyykcudbvyudmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336973.2297757-34-61506396779622/AnsiballZ_stat.py'
Oct 01 16:42:53 compute-0 sudo[140001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:53 compute-0 sudo[139989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:42:53 compute-0 sudo[139989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:42:53 compute-0 sudo[139989]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:53 compute-0 sudo[140022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:42:53 compute-0 sudo[140022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:42:53 compute-0 sudo[140022]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:53 compute-0 sudo[140047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:42:53 compute-0 sudo[140047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:42:53 compute-0 sudo[140047]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:53 compute-0 python3.9[140016]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:42:53 compute-0 sudo[140001]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:53 compute-0 sudo[140072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:42:53 compute-0 sudo[140072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:42:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:42:54 compute-0 sudo[140235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmkbxccfcclphwctbzktfgjjjkobfere ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336973.2297757-34-61506396779622/AnsiballZ_copy.py'
Oct 01 16:42:54 compute-0 sudo[140235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:42:54 compute-0 sudo[140072]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:42:54 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:42:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:42:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:42:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:42:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:42:54 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 7206e93d-2a54-4f79-b081-9308f0c85929 does not exist
Oct 01 16:42:54 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev efac3b25-9124-41b3-b95e-4ef6dcece37f does not exist
Oct 01 16:42:54 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 801d62f0-04af-4f66-9b52-0d271ed48f96 does not exist
Oct 01 16:42:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:42:54 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:42:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:42:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:42:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:42:54 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:42:54 compute-0 python3.9[140239]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336973.2297757-34-61506396779622/.source.conf _original_basename=ceph.conf follow=False checksum=ce1e3811bfc3321a6d8d0a89fc510653c338f215 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:42:54 compute-0 sudo[140235]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:54 compute-0 sudo[140252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:42:54 compute-0 sudo[140252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:42:54 compute-0 sudo[140252]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:54 compute-0 sudo[140300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:42:54 compute-0 sudo[140300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:42:54 compute-0 sudo[140300]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:54 compute-0 sudo[140326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:42:54 compute-0 sudo[140326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:42:54 compute-0 sudo[140326]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:54 compute-0 sshd-session[139417]: Connection closed by 192.168.122.30 port 36956
Oct 01 16:42:54 compute-0 sshd-session[139414]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:42:54 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Oct 01 16:42:54 compute-0 systemd[1]: session-45.scope: Consumed 3.006s CPU time.
Oct 01 16:42:54 compute-0 systemd-logind[788]: Session 45 logged out. Waiting for processes to exit.
Oct 01 16:42:54 compute-0 systemd-logind[788]: Removed session 45.
Oct 01 16:42:54 compute-0 sudo[140351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:42:54 compute-0 sudo[140351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:42:54 compute-0 ceph-mon[74273]: pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:42:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:42:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:42:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:42:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:42:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:42:55 compute-0 podman[140416]: 2025-10-01 16:42:55.054338932 +0000 UTC m=+0.034342601 container create 6966ebab88c83d2c57bf506f7d6ee9ba8d9428ceb0246f169d01dba9726ce4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hellman, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 01 16:42:55 compute-0 systemd[1]: Started libpod-conmon-6966ebab88c83d2c57bf506f7d6ee9ba8d9428ceb0246f169d01dba9726ce4a4.scope.
Oct 01 16:42:55 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:42:55 compute-0 podman[140416]: 2025-10-01 16:42:55.113338622 +0000 UTC m=+0.093342311 container init 6966ebab88c83d2c57bf506f7d6ee9ba8d9428ceb0246f169d01dba9726ce4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hellman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 16:42:55 compute-0 podman[140416]: 2025-10-01 16:42:55.11862981 +0000 UTC m=+0.098633479 container start 6966ebab88c83d2c57bf506f7d6ee9ba8d9428ceb0246f169d01dba9726ce4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:42:55 compute-0 podman[140416]: 2025-10-01 16:42:55.12156657 +0000 UTC m=+0.101570239 container attach 6966ebab88c83d2c57bf506f7d6ee9ba8d9428ceb0246f169d01dba9726ce4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hellman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:42:55 compute-0 great_hellman[140433]: 167 167
Oct 01 16:42:55 compute-0 systemd[1]: libpod-6966ebab88c83d2c57bf506f7d6ee9ba8d9428ceb0246f169d01dba9726ce4a4.scope: Deactivated successfully.
Oct 01 16:42:55 compute-0 conmon[140433]: conmon 6966ebab88c83d2c57bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6966ebab88c83d2c57bf506f7d6ee9ba8d9428ceb0246f169d01dba9726ce4a4.scope/container/memory.events
Oct 01 16:42:55 compute-0 podman[140416]: 2025-10-01 16:42:55.12543742 +0000 UTC m=+0.105441089 container died 6966ebab88c83d2c57bf506f7d6ee9ba8d9428ceb0246f169d01dba9726ce4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hellman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 16:42:55 compute-0 podman[140416]: 2025-10-01 16:42:55.038238506 +0000 UTC m=+0.018242195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:42:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3804e1ab7c96692b6a67b31187bc29425a6dc32f26a6394a38951f2dd13edb0a-merged.mount: Deactivated successfully.
Oct 01 16:42:55 compute-0 podman[140416]: 2025-10-01 16:42:55.167385851 +0000 UTC m=+0.147389560 container remove 6966ebab88c83d2c57bf506f7d6ee9ba8d9428ceb0246f169d01dba9726ce4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hellman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:42:55 compute-0 systemd[1]: libpod-conmon-6966ebab88c83d2c57bf506f7d6ee9ba8d9428ceb0246f169d01dba9726ce4a4.scope: Deactivated successfully.
Oct 01 16:42:55 compute-0 podman[140456]: 2025-10-01 16:42:55.405678201 +0000 UTC m=+0.064531003 container create 9b82a95421cc79db290cd73232dcba6743d4111c9ef9f0f88be57c7698cb3148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_allen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 16:42:55 compute-0 systemd[1]: Started libpod-conmon-9b82a95421cc79db290cd73232dcba6743d4111c9ef9f0f88be57c7698cb3148.scope.
Oct 01 16:42:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:55 compute-0 podman[140456]: 2025-10-01 16:42:55.379973555 +0000 UTC m=+0.038826407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:42:55 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897971865bd86bbf40b039746092ef48281690695a6c12b1c5c552a64a9d9513/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897971865bd86bbf40b039746092ef48281690695a6c12b1c5c552a64a9d9513/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897971865bd86bbf40b039746092ef48281690695a6c12b1c5c552a64a9d9513/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897971865bd86bbf40b039746092ef48281690695a6c12b1c5c552a64a9d9513/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897971865bd86bbf40b039746092ef48281690695a6c12b1c5c552a64a9d9513/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:42:55 compute-0 podman[140456]: 2025-10-01 16:42:55.503440677 +0000 UTC m=+0.162293519 container init 9b82a95421cc79db290cd73232dcba6743d4111c9ef9f0f88be57c7698cb3148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_allen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:42:55 compute-0 podman[140456]: 2025-10-01 16:42:55.518706876 +0000 UTC m=+0.177559678 container start 9b82a95421cc79db290cd73232dcba6743d4111c9ef9f0f88be57c7698cb3148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_allen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 16:42:55 compute-0 podman[140456]: 2025-10-01 16:42:55.524177319 +0000 UTC m=+0.183030101 container attach 9b82a95421cc79db290cd73232dcba6743d4111c9ef9f0f88be57c7698cb3148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:42:56 compute-0 reverent_allen[140473]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:42:56 compute-0 reverent_allen[140473]: --> relative data size: 1.0
Oct 01 16:42:56 compute-0 reverent_allen[140473]: --> All data devices are unavailable
Oct 01 16:42:56 compute-0 systemd[1]: libpod-9b82a95421cc79db290cd73232dcba6743d4111c9ef9f0f88be57c7698cb3148.scope: Deactivated successfully.
Oct 01 16:42:56 compute-0 systemd[1]: libpod-9b82a95421cc79db290cd73232dcba6743d4111c9ef9f0f88be57c7698cb3148.scope: Consumed 1.128s CPU time.
Oct 01 16:42:56 compute-0 podman[140456]: 2025-10-01 16:42:56.686649583 +0000 UTC m=+1.345502385 container died 9b82a95421cc79db290cd73232dcba6743d4111c9ef9f0f88be57c7698cb3148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_allen, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 16:42:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-897971865bd86bbf40b039746092ef48281690695a6c12b1c5c552a64a9d9513-merged.mount: Deactivated successfully.
Oct 01 16:42:56 compute-0 podman[140456]: 2025-10-01 16:42:56.759132184 +0000 UTC m=+1.417984976 container remove 9b82a95421cc79db290cd73232dcba6743d4111c9ef9f0f88be57c7698cb3148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 16:42:56 compute-0 ceph-mon[74273]: pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:56 compute-0 systemd[1]: libpod-conmon-9b82a95421cc79db290cd73232dcba6743d4111c9ef9f0f88be57c7698cb3148.scope: Deactivated successfully.
Oct 01 16:42:56 compute-0 sudo[140351]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:56 compute-0 sudo[140514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:42:56 compute-0 sudo[140514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:42:56 compute-0 sudo[140514]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:56 compute-0 sudo[140539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:42:56 compute-0 sudo[140539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:42:56 compute-0 sudo[140539]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:57 compute-0 sudo[140564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:42:57 compute-0 sudo[140564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:42:57 compute-0 sudo[140564]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:57 compute-0 sudo[140589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:42:57 compute-0 sudo[140589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:42:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:57 compute-0 podman[140654]: 2025-10-01 16:42:57.508082246 +0000 UTC m=+0.067073257 container create d6f2c25254edf55f89a23443183f9ff19cb91b29a2c54c0c4b09f557e4b40401 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 16:42:57 compute-0 systemd[1]: Started libpod-conmon-d6f2c25254edf55f89a23443183f9ff19cb91b29a2c54c0c4b09f557e4b40401.scope.
Oct 01 16:42:57 compute-0 podman[140654]: 2025-10-01 16:42:57.479746153 +0000 UTC m=+0.038737214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:42:57 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:42:57 compute-0 podman[140654]: 2025-10-01 16:42:57.606208823 +0000 UTC m=+0.165199804 container init d6f2c25254edf55f89a23443183f9ff19cb91b29a2c54c0c4b09f557e4b40401 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feistel, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:42:57 compute-0 podman[140654]: 2025-10-01 16:42:57.612259057 +0000 UTC m=+0.171250058 container start d6f2c25254edf55f89a23443183f9ff19cb91b29a2c54c0c4b09f557e4b40401 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 16:42:57 compute-0 podman[140654]: 2025-10-01 16:42:57.616258725 +0000 UTC m=+0.175249716 container attach d6f2c25254edf55f89a23443183f9ff19cb91b29a2c54c0c4b09f557e4b40401 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 16:42:57 compute-0 peaceful_feistel[140671]: 167 167
Oct 01 16:42:57 compute-0 systemd[1]: libpod-d6f2c25254edf55f89a23443183f9ff19cb91b29a2c54c0c4b09f557e4b40401.scope: Deactivated successfully.
Oct 01 16:42:57 compute-0 podman[140654]: 2025-10-01 16:42:57.618882781 +0000 UTC m=+0.177873782 container died d6f2c25254edf55f89a23443183f9ff19cb91b29a2c54c0c4b09f557e4b40401 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feistel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 16:42:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4586e8cf4cbb36a03aadafe4743d256d2f4af9f9ad209b1675307553184187d2-merged.mount: Deactivated successfully.
Oct 01 16:42:57 compute-0 podman[140654]: 2025-10-01 16:42:57.664563813 +0000 UTC m=+0.223554814 container remove d6f2c25254edf55f89a23443183f9ff19cb91b29a2c54c0c4b09f557e4b40401 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:42:57 compute-0 systemd[1]: libpod-conmon-d6f2c25254edf55f89a23443183f9ff19cb91b29a2c54c0c4b09f557e4b40401.scope: Deactivated successfully.
Oct 01 16:42:57 compute-0 podman[140695]: 2025-10-01 16:42:57.907049048 +0000 UTC m=+0.070590368 container create 8b5871f1a83daeaa3e365041aef24414c1a84f5622e7b3a07e7a67637dad5bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:42:57 compute-0 systemd[1]: Started libpod-conmon-8b5871f1a83daeaa3e365041aef24414c1a84f5622e7b3a07e7a67637dad5bf5.scope.
Oct 01 16:42:57 compute-0 podman[140695]: 2025-10-01 16:42:57.879270323 +0000 UTC m=+0.042811693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:42:57 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:42:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eca96fa21cc5069ff96f260d615870cf880fc205a6e52812fe465021d879ca2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:42:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eca96fa21cc5069ff96f260d615870cf880fc205a6e52812fe465021d879ca2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:42:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eca96fa21cc5069ff96f260d615870cf880fc205a6e52812fe465021d879ca2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:42:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eca96fa21cc5069ff96f260d615870cf880fc205a6e52812fe465021d879ca2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:42:58 compute-0 podman[140695]: 2025-10-01 16:42:58.018700812 +0000 UTC m=+0.182242192 container init 8b5871f1a83daeaa3e365041aef24414c1a84f5622e7b3a07e7a67637dad5bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:42:58 compute-0 podman[140695]: 2025-10-01 16:42:58.033771985 +0000 UTC m=+0.197313305 container start 8b5871f1a83daeaa3e365041aef24414c1a84f5622e7b3a07e7a67637dad5bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct 01 16:42:58 compute-0 podman[140695]: 2025-10-01 16:42:58.038144974 +0000 UTC m=+0.201686304 container attach 8b5871f1a83daeaa3e365041aef24414c1a84f5622e7b3a07e7a67637dad5bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 16:42:58 compute-0 musing_cray[140712]: {
Oct 01 16:42:58 compute-0 musing_cray[140712]:     "0": [
Oct 01 16:42:58 compute-0 musing_cray[140712]:         {
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "devices": [
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "/dev/loop3"
Oct 01 16:42:58 compute-0 musing_cray[140712]:             ],
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "lv_name": "ceph_lv0",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "lv_size": "21470642176",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "name": "ceph_lv0",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "tags": {
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.cluster_name": "ceph",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.crush_device_class": "",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.encrypted": "0",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.osd_id": "0",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.type": "block",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.vdo": "0"
Oct 01 16:42:58 compute-0 musing_cray[140712]:             },
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "type": "block",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "vg_name": "ceph_vg0"
Oct 01 16:42:58 compute-0 musing_cray[140712]:         }
Oct 01 16:42:58 compute-0 musing_cray[140712]:     ],
Oct 01 16:42:58 compute-0 musing_cray[140712]:     "1": [
Oct 01 16:42:58 compute-0 musing_cray[140712]:         {
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "devices": [
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "/dev/loop4"
Oct 01 16:42:58 compute-0 musing_cray[140712]:             ],
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "lv_name": "ceph_lv1",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "lv_size": "21470642176",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "name": "ceph_lv1",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "tags": {
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.cluster_name": "ceph",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.crush_device_class": "",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.encrypted": "0",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.osd_id": "1",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.type": "block",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.vdo": "0"
Oct 01 16:42:58 compute-0 musing_cray[140712]:             },
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "type": "block",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "vg_name": "ceph_vg1"
Oct 01 16:42:58 compute-0 musing_cray[140712]:         }
Oct 01 16:42:58 compute-0 musing_cray[140712]:     ],
Oct 01 16:42:58 compute-0 musing_cray[140712]:     "2": [
Oct 01 16:42:58 compute-0 musing_cray[140712]:         {
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "devices": [
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "/dev/loop5"
Oct 01 16:42:58 compute-0 musing_cray[140712]:             ],
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "lv_name": "ceph_lv2",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "lv_size": "21470642176",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "name": "ceph_lv2",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "tags": {
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.cluster_name": "ceph",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.crush_device_class": "",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.encrypted": "0",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.osd_id": "2",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.type": "block",
Oct 01 16:42:58 compute-0 musing_cray[140712]:                 "ceph.vdo": "0"
Oct 01 16:42:58 compute-0 musing_cray[140712]:             },
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "type": "block",
Oct 01 16:42:58 compute-0 musing_cray[140712]:             "vg_name": "ceph_vg2"
Oct 01 16:42:58 compute-0 musing_cray[140712]:         }
Oct 01 16:42:58 compute-0 musing_cray[140712]:     ]
Oct 01 16:42:58 compute-0 musing_cray[140712]: }
Oct 01 16:42:58 compute-0 ceph-mon[74273]: pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:42:58 compute-0 systemd[1]: libpod-8b5871f1a83daeaa3e365041aef24414c1a84f5622e7b3a07e7a67637dad5bf5.scope: Deactivated successfully.
Oct 01 16:42:58 compute-0 podman[140695]: 2025-10-01 16:42:58.808374934 +0000 UTC m=+0.971916284 container died 8b5871f1a83daeaa3e365041aef24414c1a84f5622e7b3a07e7a67637dad5bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:42:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-eca96fa21cc5069ff96f260d615870cf880fc205a6e52812fe465021d879ca2f-merged.mount: Deactivated successfully.
Oct 01 16:42:58 compute-0 podman[140695]: 2025-10-01 16:42:58.86798541 +0000 UTC m=+1.031526690 container remove 8b5871f1a83daeaa3e365041aef24414c1a84f5622e7b3a07e7a67637dad5bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 16:42:58 compute-0 systemd[1]: libpod-conmon-8b5871f1a83daeaa3e365041aef24414c1a84f5622e7b3a07e7a67637dad5bf5.scope: Deactivated successfully.
Oct 01 16:42:58 compute-0 sudo[140589]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:58 compute-0 sudo[140733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:42:58 compute-0 sudo[140733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:42:58 compute-0 sudo[140733]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:59 compute-0 sudo[140758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:42:59 compute-0 sudo[140758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:42:59 compute-0 sudo[140758]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:59 compute-0 sudo[140783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:42:59 compute-0 sudo[140783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:42:59 compute-0 sudo[140783]: pam_unix(sudo:session): session closed for user root
Oct 01 16:42:59 compute-0 sudo[140808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:42:59 compute-0 sudo[140808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:42:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:42:59 compute-0 podman[140874]: 2025-10-01 16:42:59.676349478 +0000 UTC m=+0.059749806 container create 0c644589257799a9c0712780635b58508161963810477fa0ecf9e51442257ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:42:59 compute-0 systemd[1]: Started libpod-conmon-0c644589257799a9c0712780635b58508161963810477fa0ecf9e51442257ff1.scope.
Oct 01 16:42:59 compute-0 podman[140874]: 2025-10-01 16:42:59.644699933 +0000 UTC m=+0.028100321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:42:59 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:42:59 compute-0 podman[140874]: 2025-10-01 16:42:59.767676137 +0000 UTC m=+0.151076505 container init 0c644589257799a9c0712780635b58508161963810477fa0ecf9e51442257ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_booth, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 16:42:59 compute-0 podman[140874]: 2025-10-01 16:42:59.779443819 +0000 UTC m=+0.162844137 container start 0c644589257799a9c0712780635b58508161963810477fa0ecf9e51442257ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:42:59 compute-0 podman[140874]: 2025-10-01 16:42:59.783714534 +0000 UTC m=+0.167114852 container attach 0c644589257799a9c0712780635b58508161963810477fa0ecf9e51442257ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_booth, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 16:42:59 compute-0 nostalgic_booth[140891]: 167 167
Oct 01 16:42:59 compute-0 systemd[1]: libpod-0c644589257799a9c0712780635b58508161963810477fa0ecf9e51442257ff1.scope: Deactivated successfully.
Oct 01 16:42:59 compute-0 conmon[140891]: conmon 0c644589257799a9c071 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0c644589257799a9c0712780635b58508161963810477fa0ecf9e51442257ff1.scope/container/memory.events
Oct 01 16:42:59 compute-0 podman[140874]: 2025-10-01 16:42:59.788206521 +0000 UTC m=+0.171606859 container died 0c644589257799a9c0712780635b58508161963810477fa0ecf9e51442257ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 01 16:42:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf7b1c568f2cb9d6dc5dc735d32eb336145543c44156d39bf1a485e1618330b6-merged.mount: Deactivated successfully.
Oct 01 16:42:59 compute-0 podman[140874]: 2025-10-01 16:42:59.847810485 +0000 UTC m=+0.231210803 container remove 0c644589257799a9c0712780635b58508161963810477fa0ecf9e51442257ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 16:42:59 compute-0 systemd[1]: libpod-conmon-0c644589257799a9c0712780635b58508161963810477fa0ecf9e51442257ff1.scope: Deactivated successfully.
Oct 01 16:43:00 compute-0 podman[140917]: 2025-10-01 16:43:00.069565708 +0000 UTC m=+0.059009522 container create 0813f9e63cfd18897faa8d0e464d680b3e09afd0f9206b2812a590fb26818225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_borg, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:43:00 compute-0 systemd[1]: Started libpod-conmon-0813f9e63cfd18897faa8d0e464d680b3e09afd0f9206b2812a590fb26818225.scope.
Oct 01 16:43:00 compute-0 podman[140917]: 2025-10-01 16:43:00.042822568 +0000 UTC m=+0.032266462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:43:00 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:43:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22f07d8d3bbda7e5302a1a43ca7507a541779dccb177aac7bec222098f5fbb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:43:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22f07d8d3bbda7e5302a1a43ca7507a541779dccb177aac7bec222098f5fbb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:43:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22f07d8d3bbda7e5302a1a43ca7507a541779dccb177aac7bec222098f5fbb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:43:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22f07d8d3bbda7e5302a1a43ca7507a541779dccb177aac7bec222098f5fbb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:43:00 compute-0 podman[140917]: 2025-10-01 16:43:00.188325204 +0000 UTC m=+0.177769078 container init 0813f9e63cfd18897faa8d0e464d680b3e09afd0f9206b2812a590fb26818225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_borg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 16:43:00 compute-0 podman[140917]: 2025-10-01 16:43:00.200023916 +0000 UTC m=+0.189467760 container start 0813f9e63cfd18897faa8d0e464d680b3e09afd0f9206b2812a590fb26818225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_borg, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 16:43:00 compute-0 podman[140917]: 2025-10-01 16:43:00.204436151 +0000 UTC m=+0.193880005 container attach 0813f9e63cfd18897faa8d0e464d680b3e09afd0f9206b2812a590fb26818225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 16:43:00 compute-0 sshd-session[140939]: Accepted publickey for zuul from 192.168.122.30 port 49692 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:43:00 compute-0 systemd-logind[788]: New session 46 of user zuul.
Oct 01 16:43:00 compute-0 systemd[1]: Started Session 46 of User zuul.
Oct 01 16:43:00 compute-0 sshd-session[140939]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:43:00 compute-0 ceph-mon[74273]: pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:01 compute-0 musing_borg[140934]: {
Oct 01 16:43:01 compute-0 musing_borg[140934]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:43:01 compute-0 musing_borg[140934]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:43:01 compute-0 musing_borg[140934]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:43:01 compute-0 musing_borg[140934]:         "osd_id": 2,
Oct 01 16:43:01 compute-0 musing_borg[140934]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:43:01 compute-0 musing_borg[140934]:         "type": "bluestore"
Oct 01 16:43:01 compute-0 musing_borg[140934]:     },
Oct 01 16:43:01 compute-0 musing_borg[140934]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:43:01 compute-0 musing_borg[140934]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:43:01 compute-0 musing_borg[140934]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:43:01 compute-0 musing_borg[140934]:         "osd_id": 0,
Oct 01 16:43:01 compute-0 musing_borg[140934]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:43:01 compute-0 musing_borg[140934]:         "type": "bluestore"
Oct 01 16:43:01 compute-0 musing_borg[140934]:     },
Oct 01 16:43:01 compute-0 musing_borg[140934]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:43:01 compute-0 musing_borg[140934]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:43:01 compute-0 musing_borg[140934]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:43:01 compute-0 musing_borg[140934]:         "osd_id": 1,
Oct 01 16:43:01 compute-0 musing_borg[140934]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:43:01 compute-0 musing_borg[140934]:         "type": "bluestore"
Oct 01 16:43:01 compute-0 musing_borg[140934]:     }
Oct 01 16:43:01 compute-0 musing_borg[140934]: }
Oct 01 16:43:01 compute-0 systemd[1]: libpod-0813f9e63cfd18897faa8d0e464d680b3e09afd0f9206b2812a590fb26818225.scope: Deactivated successfully.
Oct 01 16:43:01 compute-0 podman[140917]: 2025-10-01 16:43:01.204240245 +0000 UTC m=+1.193684049 container died 0813f9e63cfd18897faa8d0e464d680b3e09afd0f9206b2812a590fb26818225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_borg, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 16:43:01 compute-0 systemd[1]: libpod-0813f9e63cfd18897faa8d0e464d680b3e09afd0f9206b2812a590fb26818225.scope: Consumed 1.013s CPU time.
Oct 01 16:43:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d22f07d8d3bbda7e5302a1a43ca7507a541779dccb177aac7bec222098f5fbb5-merged.mount: Deactivated successfully.
Oct 01 16:43:01 compute-0 podman[140917]: 2025-10-01 16:43:01.283699878 +0000 UTC m=+1.273143682 container remove 0813f9e63cfd18897faa8d0e464d680b3e09afd0f9206b2812a590fb26818225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_borg, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:43:01 compute-0 systemd[1]: libpod-conmon-0813f9e63cfd18897faa8d0e464d680b3e09afd0f9206b2812a590fb26818225.scope: Deactivated successfully.
Oct 01 16:43:01 compute-0 sudo[140808]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:43:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:43:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:43:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:43:01 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 9c930079-4b4b-45c9-a01f-c2b17545be20 does not exist
Oct 01 16:43:01 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 61fa1558-4013-409a-bf78-f986aa254bdc does not exist
Oct 01 16:43:01 compute-0 sudo[141087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:43:01 compute-0 sudo[141087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:43:01 compute-0 sudo[141087]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:01 compute-0 sudo[141134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:43:01 compute-0 sudo[141134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:43:01 compute-0 sudo[141134]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:01 compute-0 python3.9[141182]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:43:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:43:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:43:02 compute-0 ceph-mon[74273]: pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:02 compute-0 sudo[141338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwftkhtvmmihtaoqtxnfatnbbdycnrkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336982.26039-34-31364479668907/AnsiballZ_file.py'
Oct 01 16:43:02 compute-0 sudo[141338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:03 compute-0 python3.9[141340]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:43:03 compute-0 sudo[141338]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:03 compute-0 sudo[141490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftinofmhowqufnpkorhxgiefapqoytan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336983.3047101-34-230039637086956/AnsiballZ_file.py'
Oct 01 16:43:03 compute-0 sudo[141490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:43:03.800533) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336983800563, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1693, "num_deletes": 252, "total_data_size": 2427076, "memory_usage": 2475992, "flush_reason": "Manual Compaction"}
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336983810370, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1434404, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7329, "largest_seqno": 9021, "table_properties": {"data_size": 1428729, "index_size": 2558, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16711, "raw_average_key_size": 20, "raw_value_size": 1415283, "raw_average_value_size": 1777, "num_data_blocks": 120, "num_entries": 796, "num_filter_entries": 796, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336829, "oldest_key_time": 1759336829, "file_creation_time": 1759336983, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 9871 microseconds, and 3681 cpu microseconds.
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:43:03.810407) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1434404 bytes OK
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:43:03.810421) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:43:03.812187) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:43:03.812202) EVENT_LOG_v1 {"time_micros": 1759336983812197, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:43:03.812221) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2419461, prev total WAL file size 2419461, number of live WAL files 2.
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:43:03.813130) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1400KB)], [20(7358KB)]
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336983813225, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8969036, "oldest_snapshot_seqno": -1}
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3391 keys, 6995818 bytes, temperature: kUnknown
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336983852411, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6995818, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6969984, "index_size": 16253, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 81108, "raw_average_key_size": 23, "raw_value_size": 6905566, "raw_average_value_size": 2036, "num_data_blocks": 719, "num_entries": 3391, "num_filter_entries": 3391, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336399, "oldest_key_time": 0, "file_creation_time": 1759336983, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:43:03.852639) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6995818 bytes
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:43:03.854102) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.5 rd, 178.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.2 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(11.1) write-amplify(4.9) OK, records in: 3835, records dropped: 444 output_compression: NoCompression
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:43:03.854126) EVENT_LOG_v1 {"time_micros": 1759336983854115, "job": 6, "event": "compaction_finished", "compaction_time_micros": 39253, "compaction_time_cpu_micros": 18692, "output_level": 6, "num_output_files": 1, "total_output_size": 6995818, "num_input_records": 3835, "num_output_records": 3391, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336983854512, "job": 6, "event": "table_file_deletion", "file_number": 22}
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336983856073, "job": 6, "event": "table_file_deletion", "file_number": 20}
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:43:03.813068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:43:03.856142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:43:03.856147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:43:03.856150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:43:03.856154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:43:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:43:03.856156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:43:03 compute-0 python3.9[141492]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:43:03 compute-0 sudo[141490]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:04 compute-0 python3.9[141642]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:43:04 compute-0 ceph-mon[74273]: pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:05 compute-0 sudo[141792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgwzjuxbqmyqgnltxkzmfsymhvvhahkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336984.9425988-57-139183169553704/AnsiballZ_seboolean.py'
Oct 01 16:43:05 compute-0 sudo[141792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:05 compute-0 python3.9[141794]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 01 16:43:06 compute-0 ceph-mon[74273]: pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:06 compute-0 sudo[141792]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:07 compute-0 sudo[141948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbpvjiujmimuwhklnzysrmrvnqewptgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336987.2374833-67-276739318566067/AnsiballZ_setup.py'
Oct 01 16:43:07 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct 01 16:43:07 compute-0 sudo[141948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:07 compute-0 python3.9[141950]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 16:43:08 compute-0 sudo[141948]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:08 compute-0 sudo[142032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqseibboeegilgbunmrmssjucvmktpmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336987.2374833-67-276739318566067/AnsiballZ_dnf.py'
Oct 01 16:43:08 compute-0 sudo[142032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:43:08 compute-0 ceph-mon[74273]: pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:09 compute-0 python3.9[142034]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:43:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:10 compute-0 sudo[142032]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:10 compute-0 ceph-mon[74273]: pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:11 compute-0 sudo[142185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcugbghwwoasnjviyzblxyuqfercozdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336990.3713546-79-8084074657662/AnsiballZ_systemd.py'
Oct 01 16:43:11 compute-0 sudo[142185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:43:11
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', '.mgr', 'vms', 'volumes', 'backups', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.log']
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:43:11 compute-0 python3.9[142187]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:43:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:11 compute-0 sudo[142185]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:12 compute-0 sudo[142340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqtlmmbowtpbzgmzbgthbmzsotxsltwg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759336991.6581826-87-23945702589537/AnsiballZ_edpm_nftables_snippet.py'
Oct 01 16:43:12 compute-0 sudo[142340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:12 compute-0 python3[142342]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 01 16:43:12 compute-0 sudo[142340]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:12 compute-0 ceph-mon[74273]: pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:13 compute-0 sudo[142492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lebtrenfezlgcdpojbxphhsstiqsawoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336992.8127117-96-206309271834454/AnsiballZ_file.py'
Oct 01 16:43:13 compute-0 sudo[142492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:13 compute-0 python3.9[142494]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:43:13 compute-0 sudo[142492]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:43:14 compute-0 sudo[142644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scsslxmkwjmpsgszhhblovfcmpymamzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336993.6424882-104-167548461501596/AnsiballZ_stat.py'
Oct 01 16:43:14 compute-0 sudo[142644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:14 compute-0 python3.9[142646]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:43:14 compute-0 sudo[142644]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:14 compute-0 sudo[142722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omlkhwfbzlnofxmmgjadyiycnlcekllo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336993.6424882-104-167548461501596/AnsiballZ_file.py'
Oct 01 16:43:14 compute-0 sudo[142722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:14 compute-0 python3.9[142724]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:43:14 compute-0 sudo[142722]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:14 compute-0 ceph-mon[74273]: pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:15 compute-0 sudo[142874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnptjsbkvqkpkheopcgwkigkukdpfujo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336994.9790072-116-255933961314672/AnsiballZ_stat.py'
Oct 01 16:43:15 compute-0 sudo[142874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:15 compute-0 python3.9[142876]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:43:15 compute-0 sudo[142874]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:15 compute-0 sudo[142952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftpetriqukcpixllxulfnztcwiljtqyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336994.9790072-116-255933961314672/AnsiballZ_file.py'
Oct 01 16:43:15 compute-0 sudo[142952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:16 compute-0 python3.9[142954]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.f0yjk2m3 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:43:16 compute-0 sudo[142952]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:16 compute-0 sudo[143104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cucjgcquedifdugbkbqkmiluijrvnfjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336996.1914752-128-97714634142293/AnsiballZ_stat.py'
Oct 01 16:43:16 compute-0 sudo[143104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:16 compute-0 python3.9[143106]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:43:16 compute-0 sudo[143104]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:16 compute-0 ceph-mon[74273]: pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:17 compute-0 sudo[143182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzqhgpadyqbqrvytblvygxozsvfpcveh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336996.1914752-128-97714634142293/AnsiballZ_file.py'
Oct 01 16:43:17 compute-0 sudo[143182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:17 compute-0 python3.9[143184]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:43:17 compute-0 sudo[143182]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:17 compute-0 sudo[143334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szmbnwbgufifqxkglyqbvzqumjkctadr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336997.473747-141-133894782767446/AnsiballZ_command.py'
Oct 01 16:43:18 compute-0 sudo[143334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:18 compute-0 python3.9[143336]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:43:18 compute-0 sudo[143334]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:43:18 compute-0 ceph-mon[74273]: pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:18 compute-0 sudo[143487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkxougltjysarsfuveecebrrqyelnyto ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759336998.4203238-149-110135894379533/AnsiballZ_edpm_nftables_from_files.py'
Oct 01 16:43:18 compute-0 sudo[143487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:19 compute-0 python3[143489]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 01 16:43:19 compute-0 sudo[143487]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:19 compute-0 sudo[143639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqfnmrxpeztkknyjcktbcwxuxlyoxzfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336999.3421547-157-163905243735789/AnsiballZ_stat.py'
Oct 01 16:43:19 compute-0 sudo[143639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:19 compute-0 python3.9[143641]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:43:20 compute-0 sudo[143639]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:20 compute-0 sudo[143764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nejemyetvwhizycepjwfcgyxtupgtrmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759336999.3421547-157-163905243735789/AnsiballZ_copy.py'
Oct 01 16:43:20 compute-0 sudo[143764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:20 compute-0 python3.9[143766]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336999.3421547-157-163905243735789/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:43:20 compute-0 sudo[143764]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:43:20 compute-0 ceph-mon[74273]: pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:43:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:43:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:21 compute-0 sudo[143916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcnvnioozujcskndqqkrmjkzuelmmcem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337001.0476377-172-154679978886724/AnsiballZ_stat.py'
Oct 01 16:43:21 compute-0 sudo[143916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:21 compute-0 python3.9[143918]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:43:21 compute-0 sudo[143916]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:22 compute-0 sudo[144041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fttxhixncfodokakarmxmbicdjkxtmee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337001.0476377-172-154679978886724/AnsiballZ_copy.py'
Oct 01 16:43:22 compute-0 sudo[144041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:22 compute-0 python3.9[144043]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759337001.0476377-172-154679978886724/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:43:22 compute-0 sudo[144041]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:22 compute-0 sudo[144193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwpldmcnalcnwbdqkzhoesaaflwnersy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337002.4515965-187-36572520923718/AnsiballZ_stat.py'
Oct 01 16:43:22 compute-0 sudo[144193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:22 compute-0 ceph-mon[74273]: pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:23 compute-0 python3.9[144195]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:43:23 compute-0 sudo[144193]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:23 compute-0 sudo[144318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijydetftpccmrmerbegkavfafluwhkfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337002.4515965-187-36572520923718/AnsiballZ_copy.py'
Oct 01 16:43:23 compute-0 sudo[144318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:23 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 16:43:23 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Cumulative writes: 2025 writes, 9017 keys, 2025 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2025 writes, 2025 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2025 writes, 9017 keys, 2025 commit groups, 1.0 writes per commit group, ingest: 11.45 MB, 0.02 MB/s
                                           Interval WAL: 2025 writes, 2025 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     62.8      0.14              0.03         3    0.046       0      0       0.0       0.0
                                             L6      1/0    6.67 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    205.5    180.4      0.08              0.04         2    0.038    7164    734       0.0       0.0
                                            Sum      1/0    6.67 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     73.9    105.1      0.21              0.07         5    0.043    7164    734       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    117.3    166.5      0.13              0.07         4    0.034    7164    734       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    205.5    180.4      0.08              0.04         2    0.038    7164    734       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    148.0      0.06              0.03         2    0.029       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.08              0.00         1    0.079       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.008, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5647d11d91f0#2 capacity: 308.00 MB usage: 600.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 9.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(35,513.89 KB,0.162937%) FilterBlock(6,27.80 KB,0.00881344%) IndexBlock(6,59.17 KB,0.0187614%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 01 16:43:23 compute-0 python3.9[144320]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759337002.4515965-187-36572520923718/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:43:23 compute-0 sudo[144318]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:43:24 compute-0 sudo[144470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atvdcysotaiafrzspvrrxuejdqbdeipc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337003.900785-202-120273973379174/AnsiballZ_stat.py'
Oct 01 16:43:24 compute-0 sudo[144470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:24 compute-0 python3.9[144472]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:43:24 compute-0 sudo[144470]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:24 compute-0 ceph-mon[74273]: pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:25 compute-0 sudo[144595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzcpszzfnzhiqzglpbiadefsqalrkvmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337003.900785-202-120273973379174/AnsiballZ_copy.py'
Oct 01 16:43:25 compute-0 sudo[144595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:25 compute-0 python3.9[144597]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759337003.900785-202-120273973379174/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:43:25 compute-0 sudo[144595]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:25 compute-0 sudo[144747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-askmonzhjgnmgvuvcxowrobsukkrkitb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337005.4549088-217-202955990868379/AnsiballZ_stat.py'
Oct 01 16:43:25 compute-0 sudo[144747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:26 compute-0 python3.9[144749]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:43:26 compute-0 sudo[144747]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:26 compute-0 sudo[144872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfmiyonktbtnykomyfphfmfcwafbdljc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337005.4549088-217-202955990868379/AnsiballZ_copy.py'
Oct 01 16:43:26 compute-0 sudo[144872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:26 compute-0 python3.9[144874]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759337005.4549088-217-202955990868379/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:43:26 compute-0 sudo[144872]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:26 compute-0 ceph-mon[74273]: pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:27 compute-0 sudo[145024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rstanixxzwagrfrksyzjsswfppfccplt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337006.9595425-232-104857642425438/AnsiballZ_file.py'
Oct 01 16:43:27 compute-0 sudo[145024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:27 compute-0 python3.9[145026]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:43:27 compute-0 sudo[145024]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:28 compute-0 sudo[145176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqwlzyykmzhccecausgfbpezqhwsxogv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337007.7783964-240-21247378551615/AnsiballZ_command.py'
Oct 01 16:43:28 compute-0 sudo[145176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:28 compute-0 python3.9[145178]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:43:28 compute-0 sudo[145176]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:43:28 compute-0 ceph-mon[74273]: pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:29 compute-0 sudo[145331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpdzkwypziivpmvljrzubchylntszaeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337008.4465883-248-126632448572123/AnsiballZ_blockinfile.py'
Oct 01 16:43:29 compute-0 sudo[145331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:29 compute-0 python3.9[145333]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:43:29 compute-0 sudo[145331]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:29 compute-0 sudo[145483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbpnuigfkcymxobgmneopzudcnvehlke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337009.4964209-257-177258212400611/AnsiballZ_command.py'
Oct 01 16:43:29 compute-0 sudo[145483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:30 compute-0 python3.9[145485]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:43:30 compute-0 sudo[145483]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:30 compute-0 sudo[145636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwhchrxkjifehaaxgyvqerdrrqivsqpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337010.2901866-265-220557243111579/AnsiballZ_stat.py'
Oct 01 16:43:30 compute-0 sudo[145636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:30 compute-0 python3.9[145638]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:43:30 compute-0 sudo[145636]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:30 compute-0 ceph-mon[74273]: pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:31 compute-0 sudo[145790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shwgdlpyrxlxxzoyjcsnsfttimjqbpig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337011.068692-273-219345399881174/AnsiballZ_command.py'
Oct 01 16:43:31 compute-0 sudo[145790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:31 compute-0 python3.9[145792]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:43:31 compute-0 sudo[145790]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:32 compute-0 sudo[145945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqqizsxxnsbmiluguczohfhdzpwiowfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337011.8467321-281-158616399018257/AnsiballZ_file.py'
Oct 01 16:43:32 compute-0 sudo[145945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:32 compute-0 python3.9[145947]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:43:32 compute-0 sudo[145945]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:32 compute-0 ceph-mon[74273]: pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:33 compute-0 python3.9[146097]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:43:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:43:33 compute-0 ceph-mon[74273]: pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:34 compute-0 sudo[146248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvkkcsypnnqtvufaoslnzcwyudkafyyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337014.0653725-321-163315831921189/AnsiballZ_command.py'
Oct 01 16:43:34 compute-0 sudo[146248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:34 compute-0 python3.9[146250]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:43:34 compute-0 ovs-vsctl[146251]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 01 16:43:34 compute-0 sudo[146248]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:35 compute-0 sudo[146401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqebqswvgyhzjxalyiqfvsefysxeoxey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337014.8145316-330-269734405685763/AnsiballZ_command.py'
Oct 01 16:43:35 compute-0 sudo[146401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:35 compute-0 python3.9[146403]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:43:35 compute-0 sudo[146401]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:35 compute-0 sudo[146556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxtdgywvrvamgabxjrhtjcahpxssshza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337015.5102544-338-93413931605189/AnsiballZ_command.py'
Oct 01 16:43:35 compute-0 sudo[146556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:36 compute-0 python3.9[146558]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:43:36 compute-0 ovs-vsctl[146559]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 01 16:43:36 compute-0 sudo[146556]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:36 compute-0 ceph-mon[74273]: pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:36 compute-0 python3.9[146709]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:43:37 compute-0 sudo[146861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luufxrhmkfcrcfbtweomlhmoshxgwrao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337017.0155828-355-53601763165357/AnsiballZ_file.py'
Oct 01 16:43:37 compute-0 sudo[146861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:37 compute-0 python3.9[146863]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:43:37 compute-0 sudo[146861]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:38 compute-0 sudo[147013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azxvvusvblnvowkroqvzavhyfknwvmvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337017.7041075-363-146756853553684/AnsiballZ_stat.py'
Oct 01 16:43:38 compute-0 sudo[147013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:38 compute-0 python3.9[147015]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:43:38 compute-0 sudo[147013]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:38 compute-0 sudo[147091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uncehghxhmakvnerwubkysscanzpkuuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337017.7041075-363-146756853553684/AnsiballZ_file.py'
Oct 01 16:43:38 compute-0 sudo[147091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:38 compute-0 ceph-mon[74273]: pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:38 compute-0 python3.9[147093]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:43:38 compute-0 sudo[147091]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:43:39 compute-0 sudo[147243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxlmakaucqaiwhuqdeepijslzgnnfhwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337018.8794246-363-237782635008690/AnsiballZ_stat.py'
Oct 01 16:43:39 compute-0 sudo[147243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:39 compute-0 python3.9[147245]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:43:39 compute-0 sudo[147243]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:39 compute-0 sudo[147321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnvchqsnchvykkgtjgmjzpikpnjrdvqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337018.8794246-363-237782635008690/AnsiballZ_file.py'
Oct 01 16:43:39 compute-0 sudo[147321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:39 compute-0 python3.9[147323]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:43:39 compute-0 sudo[147321]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:40 compute-0 sudo[147473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdgmzzowcaoyfhhpcmbkjnthvfqesbsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337020.0652745-386-66082799103013/AnsiballZ_file.py'
Oct 01 16:43:40 compute-0 sudo[147473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:40 compute-0 ceph-mon[74273]: pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:40 compute-0 python3.9[147475]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:43:40 compute-0 sudo[147473]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:41 compute-0 sudo[147625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkyyahcadhroudgnjrozpnwtfwqdxxxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337020.8217454-394-239430539843777/AnsiballZ_stat.py'
Oct 01 16:43:41 compute-0 sudo[147625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:41 compute-0 python3.9[147627]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:43:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:43:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:43:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:43:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:43:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:43:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:43:41 compute-0 sudo[147625]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:41 compute-0 sudo[147703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psgsdlnyjbzsmazhfoezinmoerwwmrmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337020.8217454-394-239430539843777/AnsiballZ_file.py'
Oct 01 16:43:41 compute-0 sudo[147703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:41 compute-0 python3.9[147705]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:43:41 compute-0 sudo[147703]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:42 compute-0 sudo[147855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-difaxryhrclbklvypyafcdxddphdgauc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337022.1306088-406-207878263029652/AnsiballZ_stat.py'
Oct 01 16:43:42 compute-0 sudo[147855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:42 compute-0 ceph-mon[74273]: pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:42 compute-0 python3.9[147857]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:43:42 compute-0 sudo[147855]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:42 compute-0 sudo[147933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xveyeieigocxqhpxfwhxzfyvimiumxoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337022.1306088-406-207878263029652/AnsiballZ_file.py'
Oct 01 16:43:42 compute-0 sudo[147933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:43 compute-0 python3.9[147935]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:43:43 compute-0 sudo[147933]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:43 compute-0 sudo[148085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isuygyfrmvuibpqqbsgikkhguelkhtmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337023.310287-418-252673676817813/AnsiballZ_systemd.py'
Oct 01 16:43:43 compute-0 sudo[148085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:43:43 compute-0 python3.9[148087]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:43:43 compute-0 systemd[1]: Reloading.
Oct 01 16:43:44 compute-0 systemd-rc-local-generator[148116]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:43:44 compute-0 systemd-sysv-generator[148120]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:43:44 compute-0 sudo[148085]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:44 compute-0 ceph-mon[74273]: pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:44 compute-0 sudo[148275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aihxhhnvtpextsjygzfdezwydpscpkuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337024.4182997-426-126674419496813/AnsiballZ_stat.py'
Oct 01 16:43:44 compute-0 sudo[148275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:44 compute-0 python3.9[148277]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:43:44 compute-0 sudo[148275]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:45 compute-0 sudo[148353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aknlrhmsellrfitzqnrvpqcyxtdxzbrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337024.4182997-426-126674419496813/AnsiballZ_file.py'
Oct 01 16:43:45 compute-0 sudo[148353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:45 compute-0 python3.9[148355]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:43:45 compute-0 sudo[148353]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:46 compute-0 sudo[148505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awbffxsbofntbscybkddlidnbhjxcstk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337025.7473917-438-96935453829106/AnsiballZ_stat.py'
Oct 01 16:43:46 compute-0 sudo[148505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:46 compute-0 python3.9[148507]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:43:46 compute-0 sudo[148505]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:46 compute-0 ceph-mon[74273]: pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:46 compute-0 sudo[148583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvphupkcbfrgcwhogazjsndgrfdrpcld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337025.7473917-438-96935453829106/AnsiballZ_file.py'
Oct 01 16:43:46 compute-0 sudo[148583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:46 compute-0 python3.9[148585]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:43:46 compute-0 sudo[148583]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:47 compute-0 rsyslogd[1001]: imjournal: 906 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 01 16:43:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:47 compute-0 sudo[148735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtwfcvbuyxjnajeyjdquhshdxshgpmhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337027.1268418-450-24093501468026/AnsiballZ_systemd.py'
Oct 01 16:43:47 compute-0 sudo[148735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:47 compute-0 python3.9[148737]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:43:47 compute-0 systemd[1]: Reloading.
Oct 01 16:43:47 compute-0 systemd-rc-local-generator[148761]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:43:47 compute-0 systemd-sysv-generator[148765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:43:48 compute-0 systemd[1]: Starting Create netns directory...
Oct 01 16:43:48 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 01 16:43:48 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 01 16:43:48 compute-0 systemd[1]: Finished Create netns directory.
Oct 01 16:43:48 compute-0 sudo[148735]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:48 compute-0 ceph-mon[74273]: pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:48 compute-0 sudo[148930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkvqoicsurvevqffhcfenagrnacneqxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337028.4342523-460-35348672138632/AnsiballZ_file.py'
Oct 01 16:43:48 compute-0 sudo[148930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:43:48 compute-0 python3.9[148932]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:43:48 compute-0 sudo[148930]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:49 compute-0 sudo[149082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zazkyqktffwlaalrfskzovjflpvsnxwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337029.1269317-468-195790934870062/AnsiballZ_stat.py'
Oct 01 16:43:49 compute-0 sudo[149082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:49 compute-0 python3.9[149084]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:43:49 compute-0 sudo[149082]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:50 compute-0 sudo[149205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpajcrnmsipefngnlytiozuyaqazynlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337029.1269317-468-195790934870062/AnsiballZ_copy.py'
Oct 01 16:43:50 compute-0 sudo[149205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:50 compute-0 python3.9[149207]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759337029.1269317-468-195790934870062/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:43:50 compute-0 sudo[149205]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:50 compute-0 ceph-mon[74273]: pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:51 compute-0 sudo[149357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llgkgwqjtvencugcqkqraqofepbwdkca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337030.7053514-485-111642806134481/AnsiballZ_file.py'
Oct 01 16:43:51 compute-0 sudo[149357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:51 compute-0 python3.9[149359]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:43:51 compute-0 sudo[149357]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:51 compute-0 sudo[149509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjwqbhpsiiceygwppeuwdidrpthaiflc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337031.4961476-493-140088325141657/AnsiballZ_stat.py'
Oct 01 16:43:51 compute-0 sudo[149509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:52 compute-0 python3.9[149511]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:43:52 compute-0 sudo[149509]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:52 compute-0 sudo[149632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbgdzcklmnnivfjhoszienaiuzbteyfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337031.4961476-493-140088325141657/AnsiballZ_copy.py'
Oct 01 16:43:52 compute-0 sudo[149632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:52 compute-0 ceph-mon[74273]: pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:52 compute-0 python3.9[149634]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759337031.4961476-493-140088325141657/.source.json _original_basename=.4e8if62l follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:43:52 compute-0 sudo[149632]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:53 compute-0 sudo[149784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpzlwfhosfmwkhbdqlwdhbevygabcdnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337032.768266-508-107278852777284/AnsiballZ_file.py'
Oct 01 16:43:53 compute-0 sudo[149784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:53 compute-0 python3.9[149786]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:43:53 compute-0 sudo[149784]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:43:53 compute-0 sudo[149936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdtlcdzmqoynptsrfbticznzujtnedpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337033.5293076-516-13837524404689/AnsiballZ_stat.py'
Oct 01 16:43:53 compute-0 sudo[149936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:54 compute-0 sudo[149936]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:54 compute-0 sudo[150059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yceehlijqdxlquxbubemedeiuzswjkoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337033.5293076-516-13837524404689/AnsiballZ_copy.py'
Oct 01 16:43:54 compute-0 sudo[150059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:54 compute-0 ceph-mon[74273]: pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:54 compute-0 sudo[150059]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:55 compute-0 sudo[150211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onbhitasnkfmaijndewuozuenvzsmrhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337035.034734-533-28235298874858/AnsiballZ_container_config_data.py'
Oct 01 16:43:55 compute-0 sudo[150211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:55 compute-0 python3.9[150213]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 01 16:43:55 compute-0 sudo[150211]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:56 compute-0 sudo[150363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evfptpucdhsorrujggvzidggaaaonahi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337035.9882977-542-192031038961487/AnsiballZ_container_config_hash.py'
Oct 01 16:43:56 compute-0 sudo[150363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:56 compute-0 ceph-mon[74273]: pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:56 compute-0 python3.9[150365]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 01 16:43:56 compute-0 sudo[150363]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:57 compute-0 sudo[150515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hubylvcuncmhgjihrgykvzhjkwshdbqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337037.0509784-551-14201679355448/AnsiballZ_podman_container_info.py'
Oct 01 16:43:57 compute-0 sudo[150515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:57 compute-0 python3.9[150517]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 01 16:43:57 compute-0 sudo[150515]: pam_unix(sudo:session): session closed for user root
Oct 01 16:43:58 compute-0 ceph-mon[74273]: pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:43:59 compute-0 sudo[150693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzopgvjrnfutqnbxfrfbytekeqvnimjr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759337038.6488564-564-161635935115534/AnsiballZ_edpm_container_manage.py'
Oct 01 16:43:59 compute-0 sudo[150693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:43:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:43:59 compute-0 python3[150695]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 01 16:44:00 compute-0 ceph-mon[74273]: pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:01 compute-0 sudo[150758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:44:01 compute-0 sudo[150758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:44:01 compute-0 sudo[150758]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:01 compute-0 sudo[150783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:44:01 compute-0 sudo[150783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:44:01 compute-0 sudo[150783]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:01 compute-0 sudo[150808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:44:01 compute-0 sudo[150808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:44:01 compute-0 sudo[150808]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:01 compute-0 sudo[150833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:44:01 compute-0 sudo[150833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:44:02 compute-0 ceph-mon[74273]: pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:03 compute-0 sudo[150833]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:44:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:44:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:44:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:44:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:44:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:44:03 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 3045f2ec-000f-44a3-adb9-25f184a2ea57 does not exist
Oct 01 16:44:03 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 180b1f26-52ab-4c4e-a7ac-b544d418b1f0 does not exist
Oct 01 16:44:03 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 526074a7-7b2a-4d7f-abd6-94ff0931ddf6 does not exist
Oct 01 16:44:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:44:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:44:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:44:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:44:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:44:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:44:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:44:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:44:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:44:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:44:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:44:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:44:03 compute-0 sudo[150923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:44:03 compute-0 sudo[150923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:44:03 compute-0 sudo[150923]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:03 compute-0 sudo[150948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:44:03 compute-0 sudo[150948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:44:03 compute-0 sudo[150948]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:44:03 compute-0 sudo[150973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:44:03 compute-0 sudo[150973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:44:03 compute-0 sudo[150973]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:03 compute-0 sudo[150998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:44:03 compute-0 sudo[150998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:44:04 compute-0 podman[150709]: 2025-10-01 16:44:04.491266087 +0000 UTC m=+4.898215869 image pull ceb6fcca0131acbc0ff37d5322c126e14f8045fca848e7440fedac2d6444d8c2 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 01 16:44:04 compute-0 podman[151082]: 2025-10-01 16:44:04.593102451 +0000 UTC m=+0.055613078 container create 2702e28416ee651e316bb78bcaaabb7401046c0715538411ecdf95f01c3ebc18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kalam, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:44:04 compute-0 systemd[1]: Started libpod-conmon-2702e28416ee651e316bb78bcaaabb7401046c0715538411ecdf95f01c3ebc18.scope.
Oct 01 16:44:04 compute-0 podman[151082]: 2025-10-01 16:44:04.566848466 +0000 UTC m=+0.029359163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:44:04 compute-0 ceph-mon[74273]: pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:04 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:44:04 compute-0 podman[151107]: 2025-10-01 16:44:04.674504344 +0000 UTC m=+0.075192789 container create 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 01 16:44:04 compute-0 podman[151107]: 2025-10-01 16:44:04.64060621 +0000 UTC m=+0.041294715 image pull ceb6fcca0131acbc0ff37d5322c126e14f8045fca848e7440fedac2d6444d8c2 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:44:04.678152) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337044678221, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 704, "num_deletes": 251, "total_data_size": 855239, "memory_usage": 867920, "flush_reason": "Manual Compaction"}
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Oct 01 16:44:04 compute-0 python3[150695]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337044687717, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 847324, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9022, "largest_seqno": 9725, "table_properties": {"data_size": 843722, "index_size": 1446, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7821, "raw_average_key_size": 18, "raw_value_size": 836494, "raw_average_value_size": 1963, "num_data_blocks": 68, "num_entries": 426, "num_filter_entries": 426, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336984, "oldest_key_time": 1759336984, "file_creation_time": 1759337044, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 9633 microseconds, and 5832 cpu microseconds.
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:44:04.687786) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 847324 bytes OK
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:44:04.687815) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:44:04.689119) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:44:04.689145) EVENT_LOG_v1 {"time_micros": 1759337044689136, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:44:04.689172) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 851596, prev total WAL file size 851596, number of live WAL files 2.
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:44:04.691397) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(827KB)], [23(6831KB)]
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337044691461, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7843142, "oldest_snapshot_seqno": -1}
Oct 01 16:44:04 compute-0 podman[151082]: 2025-10-01 16:44:04.692112089 +0000 UTC m=+0.154622736 container init 2702e28416ee651e316bb78bcaaabb7401046c0715538411ecdf95f01c3ebc18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 01 16:44:04 compute-0 podman[151082]: 2025-10-01 16:44:04.699253759 +0000 UTC m=+0.161764356 container start 2702e28416ee651e316bb78bcaaabb7401046c0715538411ecdf95f01c3ebc18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 16:44:04 compute-0 podman[151082]: 2025-10-01 16:44:04.70277779 +0000 UTC m=+0.165288427 container attach 2702e28416ee651e316bb78bcaaabb7401046c0715538411ecdf95f01c3ebc18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:44:04 compute-0 compassionate_kalam[151122]: 167 167
Oct 01 16:44:04 compute-0 systemd[1]: libpod-2702e28416ee651e316bb78bcaaabb7401046c0715538411ecdf95f01c3ebc18.scope: Deactivated successfully.
Oct 01 16:44:04 compute-0 podman[151082]: 2025-10-01 16:44:04.705628457 +0000 UTC m=+0.168139094 container died 2702e28416ee651e316bb78bcaaabb7401046c0715538411ecdf95f01c3ebc18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3305 keys, 6072066 bytes, temperature: kUnknown
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337044735521, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6072066, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6048150, "index_size": 14527, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 80112, "raw_average_key_size": 24, "raw_value_size": 5986580, "raw_average_value_size": 1811, "num_data_blocks": 632, "num_entries": 3305, "num_filter_entries": 3305, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336399, "oldest_key_time": 0, "file_creation_time": 1759337044, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:44:04.735703) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6072066 bytes
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:44:04.737840) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.8 rd, 137.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.7 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(16.4) write-amplify(7.2) OK, records in: 3817, records dropped: 512 output_compression: NoCompression
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:44:04.737857) EVENT_LOG_v1 {"time_micros": 1759337044737848, "job": 8, "event": "compaction_finished", "compaction_time_micros": 44105, "compaction_time_cpu_micros": 32527, "output_level": 6, "num_output_files": 1, "total_output_size": 6072066, "num_input_records": 3817, "num_output_records": 3305, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337044738062, "job": 8, "event": "table_file_deletion", "file_number": 25}
Oct 01 16:44:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-aef2c630e5ca0ad33e9ab60ee20ce0238fa7b47e14e310c7b5e92b83698ea304-merged.mount: Deactivated successfully.
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337044739032, "job": 8, "event": "table_file_deletion", "file_number": 23}
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:44:04.690069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:44:04.739124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:44:04.739131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:44:04.739135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:44:04.739137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:44:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:44:04.739140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:44:04 compute-0 podman[151082]: 2025-10-01 16:44:04.767976241 +0000 UTC m=+0.230486848 container remove 2702e28416ee651e316bb78bcaaabb7401046c0715538411ecdf95f01c3ebc18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 01 16:44:04 compute-0 systemd[1]: libpod-conmon-2702e28416ee651e316bb78bcaaabb7401046c0715538411ecdf95f01c3ebc18.scope: Deactivated successfully.
Oct 01 16:44:04 compute-0 sudo[150693]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:04 compute-0 podman[151182]: 2025-10-01 16:44:04.959426254 +0000 UTC m=+0.040640773 container create 0d431ca64cda8985389d4a937a205053958cd83f4674a83edc4a11df42eff0c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_volhard, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 16:44:04 compute-0 systemd[1]: Started libpod-conmon-0d431ca64cda8985389d4a937a205053958cd83f4674a83edc4a11df42eff0c2.scope.
Oct 01 16:44:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0a7af2f4014e4c00d04f61b8a9b5b84527d26466936f4f91b7c500ff20ac23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0a7af2f4014e4c00d04f61b8a9b5b84527d26466936f4f91b7c500ff20ac23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0a7af2f4014e4c00d04f61b8a9b5b84527d26466936f4f91b7c500ff20ac23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0a7af2f4014e4c00d04f61b8a9b5b84527d26466936f4f91b7c500ff20ac23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0a7af2f4014e4c00d04f61b8a9b5b84527d26466936f4f91b7c500ff20ac23/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:44:05 compute-0 podman[151182]: 2025-10-01 16:44:04.938958771 +0000 UTC m=+0.020173340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:44:05 compute-0 podman[151182]: 2025-10-01 16:44:05.045603138 +0000 UTC m=+0.126817677 container init 0d431ca64cda8985389d4a937a205053958cd83f4674a83edc4a11df42eff0c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 16:44:05 compute-0 podman[151182]: 2025-10-01 16:44:05.053125219 +0000 UTC m=+0.134339738 container start 0d431ca64cda8985389d4a937a205053958cd83f4674a83edc4a11df42eff0c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 16:44:05 compute-0 podman[151182]: 2025-10-01 16:44:05.066941461 +0000 UTC m=+0.148156000 container attach 0d431ca64cda8985389d4a937a205053958cd83f4674a83edc4a11df42eff0c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:44:05 compute-0 sudo[151346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsigbovfscyrahndaqizvetxnsojwjts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337045.0137818-572-171244932124751/AnsiballZ_stat.py'
Oct 01 16:44:05 compute-0 sudo[151346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:05 compute-0 python3.9[151348]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:44:05 compute-0 sudo[151346]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:06 compute-0 youthful_volhard[151217]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:44:06 compute-0 youthful_volhard[151217]: --> relative data size: 1.0
Oct 01 16:44:06 compute-0 youthful_volhard[151217]: --> All data devices are unavailable
Oct 01 16:44:06 compute-0 systemd[1]: libpod-0d431ca64cda8985389d4a937a205053958cd83f4674a83edc4a11df42eff0c2.scope: Deactivated successfully.
Oct 01 16:44:06 compute-0 podman[151182]: 2025-10-01 16:44:06.212132467 +0000 UTC m=+1.293347006 container died 0d431ca64cda8985389d4a937a205053958cd83f4674a83edc4a11df42eff0c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 16:44:06 compute-0 systemd[1]: libpod-0d431ca64cda8985389d4a937a205053958cd83f4674a83edc4a11df42eff0c2.scope: Consumed 1.076s CPU time.
Oct 01 16:44:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd0a7af2f4014e4c00d04f61b8a9b5b84527d26466936f4f91b7c500ff20ac23-merged.mount: Deactivated successfully.
Oct 01 16:44:06 compute-0 podman[151182]: 2025-10-01 16:44:06.281169504 +0000 UTC m=+1.362384033 container remove 0d431ca64cda8985389d4a937a205053958cd83f4674a83edc4a11df42eff0c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_volhard, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 16:44:06 compute-0 systemd[1]: libpod-conmon-0d431ca64cda8985389d4a937a205053958cd83f4674a83edc4a11df42eff0c2.scope: Deactivated successfully.
Oct 01 16:44:06 compute-0 sudo[150998]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:06 compute-0 sudo[151538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avhydslossrvubjxbbiaszrxhquarnki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337045.925986-581-68978764831033/AnsiballZ_file.py'
Oct 01 16:44:06 compute-0 sudo[151538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:06 compute-0 sudo[151539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:44:06 compute-0 sudo[151539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:44:06 compute-0 sudo[151539]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:06 compute-0 sudo[151566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:44:06 compute-0 sudo[151566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:44:06 compute-0 sudo[151566]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:06 compute-0 sudo[151591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:44:06 compute-0 sudo[151591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:44:06 compute-0 sudo[151591]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:06 compute-0 python3.9[151558]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:44:06 compute-0 sudo[151538]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:06 compute-0 sudo[151616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:44:06 compute-0 sudo[151616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:44:06 compute-0 ceph-mon[74273]: pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:06 compute-0 sudo[151740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhuvhyzerjtsugtlzqsiecftktlrdskl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337045.925986-581-68978764831033/AnsiballZ_stat.py'
Oct 01 16:44:06 compute-0 sudo[151740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:06 compute-0 podman[151756]: 2025-10-01 16:44:06.882111954 +0000 UTC m=+0.042279544 container create fd5549e85f8aa479ec8675d0b4b63d21a90c9cec6f5e9b09dc6b55480f98cc27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_satoshi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:44:06 compute-0 systemd[1]: Started libpod-conmon-fd5549e85f8aa479ec8675d0b4b63d21a90c9cec6f5e9b09dc6b55480f98cc27.scope.
Oct 01 16:44:06 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:44:06 compute-0 podman[151756]: 2025-10-01 16:44:06.861259661 +0000 UTC m=+0.021427301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:44:06 compute-0 podman[151756]: 2025-10-01 16:44:06.967669569 +0000 UTC m=+0.127837239 container init fd5549e85f8aa479ec8675d0b4b63d21a90c9cec6f5e9b09dc6b55480f98cc27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:44:06 compute-0 podman[151756]: 2025-10-01 16:44:06.976250344 +0000 UTC m=+0.136417944 container start fd5549e85f8aa479ec8675d0b4b63d21a90c9cec6f5e9b09dc6b55480f98cc27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_satoshi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 16:44:06 compute-0 friendly_satoshi[151772]: 167 167
Oct 01 16:44:06 compute-0 podman[151756]: 2025-10-01 16:44:06.980869962 +0000 UTC m=+0.141037642 container attach fd5549e85f8aa479ec8675d0b4b63d21a90c9cec6f5e9b09dc6b55480f98cc27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:44:06 compute-0 systemd[1]: libpod-fd5549e85f8aa479ec8675d0b4b63d21a90c9cec6f5e9b09dc6b55480f98cc27.scope: Deactivated successfully.
Oct 01 16:44:06 compute-0 podman[151756]: 2025-10-01 16:44:06.983790345 +0000 UTC m=+0.143957935 container died fd5549e85f8aa479ec8675d0b4b63d21a90c9cec6f5e9b09dc6b55480f98cc27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_satoshi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:44:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-aefcbe82b3bec80f5d7aceba3727def8f7aba1ec1da71cc5ee287b5694cd99d5-merged.mount: Deactivated successfully.
Oct 01 16:44:07 compute-0 podman[151756]: 2025-10-01 16:44:07.023662186 +0000 UTC m=+0.183829816 container remove fd5549e85f8aa479ec8675d0b4b63d21a90c9cec6f5e9b09dc6b55480f98cc27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_satoshi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:44:07 compute-0 systemd[1]: libpod-conmon-fd5549e85f8aa479ec8675d0b4b63d21a90c9cec6f5e9b09dc6b55480f98cc27.scope: Deactivated successfully.
Oct 01 16:44:07 compute-0 python3.9[151753]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:44:07 compute-0 sudo[151740]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:07 compute-0 podman[151819]: 2025-10-01 16:44:07.195446099 +0000 UTC m=+0.044273683 container create f6e8a9aa83b256b295991430eaa60f4c4c558fcff737474d771ca516d3d028da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bohr, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 16:44:07 compute-0 systemd[1]: Started libpod-conmon-f6e8a9aa83b256b295991430eaa60f4c4c558fcff737474d771ca516d3d028da.scope.
Oct 01 16:44:07 compute-0 podman[151819]: 2025-10-01 16:44:07.178676091 +0000 UTC m=+0.027503695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:44:07 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ae3c0a7cd6636f9cc3add2a9bda5a34444f8a6699f133bd605698a1129f0f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ae3c0a7cd6636f9cc3add2a9bda5a34444f8a6699f133bd605698a1129f0f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ae3c0a7cd6636f9cc3add2a9bda5a34444f8a6699f133bd605698a1129f0f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ae3c0a7cd6636f9cc3add2a9bda5a34444f8a6699f133bd605698a1129f0f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:44:07 compute-0 podman[151819]: 2025-10-01 16:44:07.303548563 +0000 UTC m=+0.152376177 container init f6e8a9aa83b256b295991430eaa60f4c4c558fcff737474d771ca516d3d028da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:44:07 compute-0 podman[151819]: 2025-10-01 16:44:07.315869946 +0000 UTC m=+0.164697570 container start f6e8a9aa83b256b295991430eaa60f4c4c558fcff737474d771ca516d3d028da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bohr, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:44:07 compute-0 podman[151819]: 2025-10-01 16:44:07.320048189 +0000 UTC m=+0.168875793 container attach f6e8a9aa83b256b295991430eaa60f4c4c558fcff737474d771ca516d3d028da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bohr, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:44:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:07 compute-0 sudo[151966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tllcovujrpzvdhshsazswocypclgntko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337047.0992591-581-105393936042769/AnsiballZ_copy.py'
Oct 01 16:44:07 compute-0 sudo[151966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:07 compute-0 python3.9[151968]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759337047.0992591-581-105393936042769/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:44:07 compute-0 sudo[151966]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]: {
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:     "0": [
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:         {
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "devices": [
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "/dev/loop3"
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             ],
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "lv_name": "ceph_lv0",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "lv_size": "21470642176",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "name": "ceph_lv0",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "tags": {
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.cluster_name": "ceph",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.crush_device_class": "",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.encrypted": "0",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.osd_id": "0",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.type": "block",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.vdo": "0"
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             },
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "type": "block",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "vg_name": "ceph_vg0"
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:         }
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:     ],
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:     "1": [
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:         {
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "devices": [
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "/dev/loop4"
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             ],
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "lv_name": "ceph_lv1",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "lv_size": "21470642176",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "name": "ceph_lv1",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "tags": {
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.cluster_name": "ceph",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.crush_device_class": "",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.encrypted": "0",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.osd_id": "1",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.type": "block",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.vdo": "0"
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             },
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "type": "block",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "vg_name": "ceph_vg1"
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:         }
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:     ],
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:     "2": [
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:         {
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "devices": [
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "/dev/loop5"
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             ],
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "lv_name": "ceph_lv2",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "lv_size": "21470642176",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "name": "ceph_lv2",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "tags": {
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.cluster_name": "ceph",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.crush_device_class": "",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.encrypted": "0",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.osd_id": "2",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.type": "block",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:                 "ceph.vdo": "0"
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             },
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "type": "block",
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:             "vg_name": "ceph_vg2"
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:         }
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]:     ]
Oct 01 16:44:08 compute-0 pedantic_bohr[151865]: }
Oct 01 16:44:08 compute-0 systemd[1]: libpod-f6e8a9aa83b256b295991430eaa60f4c4c558fcff737474d771ca516d3d028da.scope: Deactivated successfully.
Oct 01 16:44:08 compute-0 podman[151819]: 2025-10-01 16:44:08.089532894 +0000 UTC m=+0.938360478 container died f6e8a9aa83b256b295991430eaa60f4c4c558fcff737474d771ca516d3d028da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bohr, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:44:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-09ae3c0a7cd6636f9cc3add2a9bda5a34444f8a6699f133bd605698a1129f0f7-merged.mount: Deactivated successfully.
Oct 01 16:44:08 compute-0 podman[151819]: 2025-10-01 16:44:08.138674544 +0000 UTC m=+0.987502128 container remove f6e8a9aa83b256b295991430eaa60f4c4c558fcff737474d771ca516d3d028da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bohr, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 01 16:44:08 compute-0 systemd[1]: libpod-conmon-f6e8a9aa83b256b295991430eaa60f4c4c558fcff737474d771ca516d3d028da.scope: Deactivated successfully.
Oct 01 16:44:08 compute-0 sudo[151616]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:08 compute-0 sudo[152058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wynwfwilobkhqoadpksodjnuoeygsfla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337047.0992591-581-105393936042769/AnsiballZ_systemd.py'
Oct 01 16:44:08 compute-0 sudo[152058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:08 compute-0 sudo[152060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:44:08 compute-0 sudo[152060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:44:08 compute-0 sudo[152060]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:08 compute-0 sudo[152086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:44:08 compute-0 sudo[152086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:44:08 compute-0 sudo[152086]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:08 compute-0 sudo[152111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:44:08 compute-0 sudo[152111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:44:08 compute-0 sudo[152111]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:08 compute-0 sudo[152136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:44:08 compute-0 sudo[152136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:44:08 compute-0 python3.9[152061]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 01 16:44:08 compute-0 systemd[1]: Reloading.
Oct 01 16:44:08 compute-0 systemd-rc-local-generator[152194]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:44:08 compute-0 systemd-sysv-generator[152198]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:44:08 compute-0 ceph-mon[74273]: pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:08 compute-0 sudo[152058]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:44:08 compute-0 podman[152237]: 2025-10-01 16:44:08.741257265 +0000 UTC m=+0.041525304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:44:08 compute-0 podman[152237]: 2025-10-01 16:44:08.875716821 +0000 UTC m=+0.175984840 container create 88c5edf9fd8516558bb6c679720d7908ff2bfcb44a0c89b78c4258a990d6e34e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_morse, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:44:08 compute-0 systemd[1]: Started libpod-conmon-88c5edf9fd8516558bb6c679720d7908ff2bfcb44a0c89b78c4258a990d6e34e.scope.
Oct 01 16:44:08 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:44:08 compute-0 podman[152237]: 2025-10-01 16:44:08.988205145 +0000 UTC m=+0.288473254 container init 88c5edf9fd8516558bb6c679720d7908ff2bfcb44a0c89b78c4258a990d6e34e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_morse, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 16:44:08 compute-0 podman[152237]: 2025-10-01 16:44:08.995363626 +0000 UTC m=+0.295631645 container start 88c5edf9fd8516558bb6c679720d7908ff2bfcb44a0c89b78c4258a990d6e34e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_morse, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:44:08 compute-0 podman[152237]: 2025-10-01 16:44:08.999144337 +0000 UTC m=+0.299412446 container attach 88c5edf9fd8516558bb6c679720d7908ff2bfcb44a0c89b78c4258a990d6e34e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 16:44:09 compute-0 sweet_morse[152276]: 167 167
Oct 01 16:44:09 compute-0 systemd[1]: libpod-88c5edf9fd8516558bb6c679720d7908ff2bfcb44a0c89b78c4258a990d6e34e.scope: Deactivated successfully.
Oct 01 16:44:09 compute-0 podman[152237]: 2025-10-01 16:44:09.001276377 +0000 UTC m=+0.301544406 container died 88c5edf9fd8516558bb6c679720d7908ff2bfcb44a0c89b78c4258a990d6e34e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_morse, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d5ac74c46dc6ba025b161c485d67e0fc14d46bb7f8d84d182577ad6b9ff9cf9-merged.mount: Deactivated successfully.
Oct 01 16:44:09 compute-0 podman[152237]: 2025-10-01 16:44:09.045317261 +0000 UTC m=+0.345585290 container remove 88c5edf9fd8516558bb6c679720d7908ff2bfcb44a0c89b78c4258a990d6e34e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_morse, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 16:44:09 compute-0 systemd[1]: libpod-conmon-88c5edf9fd8516558bb6c679720d7908ff2bfcb44a0c89b78c4258a990d6e34e.scope: Deactivated successfully.
Oct 01 16:44:09 compute-0 sudo[152346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbvrvshnrsdlvccpoqcoaaituzzkahdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337047.0992591-581-105393936042769/AnsiballZ_systemd.py'
Oct 01 16:44:09 compute-0 sudo[152346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:09 compute-0 podman[152354]: 2025-10-01 16:44:09.22794243 +0000 UTC m=+0.045836258 container create 38ada944ae415429390bf846f86aa645ba878ff591cf3caaf817ad19d102aa51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 01 16:44:09 compute-0 systemd[1]: Started libpod-conmon-38ada944ae415429390bf846f86aa645ba878ff591cf3caaf817ad19d102aa51.scope.
Oct 01 16:44:09 compute-0 podman[152354]: 2025-10-01 16:44:09.204112398 +0000 UTC m=+0.022006217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:44:09 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cbe5ed16740e9f831190958982d2c6822c15fb1572d81bf41242170e9c9b073/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cbe5ed16740e9f831190958982d2c6822c15fb1572d81bf41242170e9c9b073/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cbe5ed16740e9f831190958982d2c6822c15fb1572d81bf41242170e9c9b073/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cbe5ed16740e9f831190958982d2c6822c15fb1572d81bf41242170e9c9b073/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:44:09 compute-0 podman[152354]: 2025-10-01 16:44:09.353218503 +0000 UTC m=+0.171112361 container init 38ada944ae415429390bf846f86aa645ba878ff591cf3caaf817ad19d102aa51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ishizaka, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:44:09 compute-0 podman[152354]: 2025-10-01 16:44:09.365654265 +0000 UTC m=+0.183548093 container start 38ada944ae415429390bf846f86aa645ba878ff591cf3caaf817ad19d102aa51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 16:44:09 compute-0 podman[152354]: 2025-10-01 16:44:09.37047506 +0000 UTC m=+0.188368888 container attach 38ada944ae415429390bf846f86aa645ba878ff591cf3caaf817ad19d102aa51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:44:09 compute-0 python3.9[152348]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:44:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:09 compute-0 systemd[1]: Reloading.
Oct 01 16:44:09 compute-0 systemd-sysv-generator[152406]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:44:09 compute-0 systemd-rc-local-generator[152403]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:44:09 compute-0 systemd[1]: Starting ovn_controller container...
Oct 01 16:44:09 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8abe8f183793b4fce9d5a890bf7af5d97a3a16461773af1da68a084d44fe339c/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 01 16:44:09 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f.
Oct 01 16:44:09 compute-0 podman[152415]: 2025-10-01 16:44:09.964447213 +0000 UTC m=+0.149730865 container init 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 01 16:44:09 compute-0 ovn_controller[152430]: + sudo -E kolla_set_configs
Oct 01 16:44:09 compute-0 podman[152415]: 2025-10-01 16:44:09.993773153 +0000 UTC m=+0.179056805 container start 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Oct 01 16:44:10 compute-0 edpm-start-podman-container[152415]: ovn_controller
Oct 01 16:44:10 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 01 16:44:10 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 01 16:44:10 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 01 16:44:10 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 01 16:44:10 compute-0 systemd[152454]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 01 16:44:10 compute-0 edpm-start-podman-container[152414]: Creating additional drop-in dependency for "ovn_controller" (347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f)
Oct 01 16:44:10 compute-0 systemd[1]: Reloading.
Oct 01 16:44:10 compute-0 podman[152436]: 2025-10-01 16:44:10.12911946 +0000 UTC m=+0.108928301 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 16:44:10 compute-0 systemd[152454]: Queued start job for default target Main User Target.
Oct 01 16:44:10 compute-0 systemd-rc-local-generator[152518]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:44:10 compute-0 systemd-sysv-generator[152521]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:44:10 compute-0 systemd[152454]: Created slice User Application Slice.
Oct 01 16:44:10 compute-0 systemd[152454]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 01 16:44:10 compute-0 systemd[152454]: Started Daily Cleanup of User's Temporary Directories.
Oct 01 16:44:10 compute-0 systemd[152454]: Reached target Paths.
Oct 01 16:44:10 compute-0 systemd[152454]: Reached target Timers.
Oct 01 16:44:10 compute-0 systemd[152454]: Starting D-Bus User Message Bus Socket...
Oct 01 16:44:10 compute-0 systemd[152454]: Starting Create User's Volatile Files and Directories...
Oct 01 16:44:10 compute-0 systemd[152454]: Listening on D-Bus User Message Bus Socket.
Oct 01 16:44:10 compute-0 systemd[152454]: Reached target Sockets.
Oct 01 16:44:10 compute-0 systemd[152454]: Finished Create User's Volatile Files and Directories.
Oct 01 16:44:10 compute-0 systemd[152454]: Reached target Basic System.
Oct 01 16:44:10 compute-0 systemd[152454]: Reached target Main User Target.
Oct 01 16:44:10 compute-0 systemd[152454]: Startup finished in 154ms.
Oct 01 16:44:10 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 01 16:44:10 compute-0 systemd[1]: Started ovn_controller container.
Oct 01 16:44:10 compute-0 systemd[1]: 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f-3e61487f09e9ffe5.service: Main process exited, code=exited, status=1/FAILURE
Oct 01 16:44:10 compute-0 systemd[1]: 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f-3e61487f09e9ffe5.service: Failed with result 'exit-code'.
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]: {
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:         "osd_id": 2,
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:         "type": "bluestore"
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:     },
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:         "osd_id": 0,
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:         "type": "bluestore"
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:     },
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:         "osd_id": 1,
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:         "type": "bluestore"
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]:     }
Oct 01 16:44:10 compute-0 intelligent_ishizaka[152370]: }
Oct 01 16:44:10 compute-0 podman[152354]: 2025-10-01 16:44:10.43373053 +0000 UTC m=+1.251624328 container died 38ada944ae415429390bf846f86aa645ba878ff591cf3caaf817ad19d102aa51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:44:10 compute-0 systemd[1]: Started Session c1 of User root.
Oct 01 16:44:10 compute-0 systemd[1]: libpod-38ada944ae415429390bf846f86aa645ba878ff591cf3caaf817ad19d102aa51.scope: Deactivated successfully.
Oct 01 16:44:10 compute-0 systemd[1]: libpod-38ada944ae415429390bf846f86aa645ba878ff591cf3caaf817ad19d102aa51.scope: Consumed 1.057s CPU time.
Oct 01 16:44:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cbe5ed16740e9f831190958982d2c6822c15fb1572d81bf41242170e9c9b073-merged.mount: Deactivated successfully.
Oct 01 16:44:10 compute-0 sudo[152346]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:10 compute-0 podman[152354]: 2025-10-01 16:44:10.49778349 +0000 UTC m=+1.315677318 container remove 38ada944ae415429390bf846f86aa645ba878ff591cf3caaf817ad19d102aa51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ishizaka, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:44:10 compute-0 systemd[1]: libpod-conmon-38ada944ae415429390bf846f86aa645ba878ff591cf3caaf817ad19d102aa51.scope: Deactivated successfully.
Oct 01 16:44:10 compute-0 ovn_controller[152430]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 01 16:44:10 compute-0 ovn_controller[152430]: INFO:__main__:Validating config file
Oct 01 16:44:10 compute-0 ovn_controller[152430]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 01 16:44:10 compute-0 ovn_controller[152430]: INFO:__main__:Writing out command to execute
Oct 01 16:44:10 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 01 16:44:10 compute-0 ovn_controller[152430]: ++ cat /run_command
Oct 01 16:44:10 compute-0 sudo[152136]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:10 compute-0 ovn_controller[152430]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 01 16:44:10 compute-0 ovn_controller[152430]: + ARGS=
Oct 01 16:44:10 compute-0 ovn_controller[152430]: + sudo kolla_copy_cacerts
Oct 01 16:44:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:44:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:44:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:44:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:44:10 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev fadc306d-eb5a-4e9a-8fcd-d74431a8de8a does not exist
Oct 01 16:44:10 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 20cbb527-1e57-427a-b299-589afdf51b50 does not exist
Oct 01 16:44:10 compute-0 systemd[1]: Started Session c2 of User root.
Oct 01 16:44:10 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 01 16:44:10 compute-0 ovn_controller[152430]: + [[ ! -n '' ]]
Oct 01 16:44:10 compute-0 ovn_controller[152430]: + . kolla_extend_start
Oct 01 16:44:10 compute-0 ovn_controller[152430]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 01 16:44:10 compute-0 ovn_controller[152430]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 01 16:44:10 compute-0 ovn_controller[152430]: + umask 0022
Oct 01 16:44:10 compute-0 ovn_controller[152430]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 01 16:44:10 compute-0 NetworkManager[44927]: <info>  [1759337050.5942] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct 01 16:44:10 compute-0 NetworkManager[44927]: <info>  [1759337050.5948] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 16:44:10 compute-0 NetworkManager[44927]: <info>  [1759337050.5957] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 01 16:44:10 compute-0 NetworkManager[44927]: <info>  [1759337050.5961] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct 01 16:44:10 compute-0 NetworkManager[44927]: <info>  [1759337050.5964] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 01 16:44:10 compute-0 kernel: br-int: entered promiscuous mode
Oct 01 16:44:10 compute-0 sudo[152592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:44:10 compute-0 sudo[152592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 01 16:44:10 compute-0 sudo[152592]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00022|main|INFO|OVS feature set changed, force recompute.
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 01 16:44:10 compute-0 ovn_controller[152430]: 2025-10-01T16:44:10Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 01 16:44:10 compute-0 NetworkManager[44927]: <info>  [1759337050.6293] manager: (ovn-901ac9-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct 01 16:44:10 compute-0 systemd-udevd[152658]: Network interface NamePolicy= disabled on kernel command line.
Oct 01 16:44:10 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Oct 01 16:44:10 compute-0 NetworkManager[44927]: <info>  [1759337050.6494] device (genev_sys_6081): carrier: link connected
Oct 01 16:44:10 compute-0 NetworkManager[44927]: <info>  [1759337050.6497] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Oct 01 16:44:10 compute-0 systemd-udevd[152662]: Network interface NamePolicy= disabled on kernel command line.
Oct 01 16:44:10 compute-0 sudo[152624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:44:10 compute-0 sudo[152624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:44:10 compute-0 sudo[152624]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:10 compute-0 ceph-mon[74273]: pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:44:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:44:10 compute-0 sudo[152780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfqncjpeyymaabnhnlkmsezikekkwxks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337050.6338897-609-94171504870201/AnsiballZ_command.py'
Oct 01 16:44:10 compute-0 sudo[152780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:11 compute-0 python3.9[152782]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:44:11 compute-0 ovs-vsctl[152783]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 01 16:44:11 compute-0 sudo[152780]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:44:11
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'backups', '.rgw.root', '.mgr', 'vms', 'default.rgw.log', 'volumes']
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:44:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:11 compute-0 sudo[152933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjtxhzkunsldnbpnalrdltahmoryqrbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337051.3340237-617-260391145828999/AnsiballZ_command.py'
Oct 01 16:44:11 compute-0 sudo[152933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:11 compute-0 python3.9[152935]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:44:11 compute-0 ovs-vsctl[152937]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 01 16:44:11 compute-0 sudo[152933]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:12 compute-0 sudo[153088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhbrttngmuoqfsbibyrujjjkdqmzfkww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337052.35561-631-175669632128017/AnsiballZ_command.py'
Oct 01 16:44:12 compute-0 sudo[153088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:12 compute-0 ceph-mon[74273]: pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:12 compute-0 python3.9[153090]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:44:12 compute-0 ovs-vsctl[153091]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 01 16:44:12 compute-0 sudo[153088]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:13 compute-0 sshd-session[140942]: Connection closed by 192.168.122.30 port 49692
Oct 01 16:44:13 compute-0 sshd-session[140939]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:44:13 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Oct 01 16:44:13 compute-0 systemd[1]: session-46.scope: Consumed 1min 568ms CPU time.
Oct 01 16:44:13 compute-0 systemd-logind[788]: Session 46 logged out. Waiting for processes to exit.
Oct 01 16:44:13 compute-0 systemd-logind[788]: Removed session 46.
Oct 01 16:44:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:44:14 compute-0 ceph-mon[74273]: pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:16 compute-0 ceph-mon[74273]: pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:44:18 compute-0 ceph-mon[74273]: pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:19 compute-0 sshd-session[153116]: Accepted publickey for zuul from 192.168.122.30 port 49148 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:44:19 compute-0 systemd-logind[788]: New session 48 of user zuul.
Oct 01 16:44:19 compute-0 systemd[1]: Started Session 48 of User zuul.
Oct 01 16:44:19 compute-0 sshd-session[153116]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:44:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:20 compute-0 python3.9[153269]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:44:20 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 01 16:44:20 compute-0 systemd[152454]: Activating special unit Exit the Session...
Oct 01 16:44:20 compute-0 systemd[152454]: Stopped target Main User Target.
Oct 01 16:44:20 compute-0 systemd[152454]: Stopped target Basic System.
Oct 01 16:44:20 compute-0 systemd[152454]: Stopped target Paths.
Oct 01 16:44:20 compute-0 systemd[152454]: Stopped target Sockets.
Oct 01 16:44:20 compute-0 systemd[152454]: Stopped target Timers.
Oct 01 16:44:20 compute-0 systemd[152454]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 01 16:44:20 compute-0 systemd[152454]: Closed D-Bus User Message Bus Socket.
Oct 01 16:44:20 compute-0 systemd[152454]: Stopped Create User's Volatile Files and Directories.
Oct 01 16:44:20 compute-0 systemd[152454]: Removed slice User Application Slice.
Oct 01 16:44:20 compute-0 systemd[152454]: Reached target Shutdown.
Oct 01 16:44:20 compute-0 systemd[152454]: Finished Exit the Session.
Oct 01 16:44:20 compute-0 systemd[152454]: Reached target Exit the Session.
Oct 01 16:44:20 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 01 16:44:20 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 01 16:44:20 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 01 16:44:20 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 01 16:44:20 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 01 16:44:20 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 01 16:44:20 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 01 16:44:20 compute-0 ceph-mon[74273]: pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:44:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:44:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:21 compute-0 sudo[153426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjfbynomvvqzqeqtqsrxztzexsrpunfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337061.033278-34-48500923326624/AnsiballZ_file.py'
Oct 01 16:44:21 compute-0 sudo[153426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:21 compute-0 python3.9[153428]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:44:21 compute-0 sudo[153426]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:22 compute-0 sudo[153578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wybjugdtlhqshjpsaifzicvryupzeyrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337062.0076303-34-112050441800098/AnsiballZ_file.py'
Oct 01 16:44:22 compute-0 sudo[153578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:22 compute-0 python3.9[153580]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:44:22 compute-0 sudo[153578]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:22 compute-0 sshd-session[153599]: Connection closed by 95.251.232.242 port 46300
Oct 01 16:44:22 compute-0 ceph-mon[74273]: pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:23 compute-0 sudo[153733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjngwtfooifvcqbvtgxfnobvwiwcyesc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337062.7455196-34-127000023208001/AnsiballZ_file.py'
Oct 01 16:44:23 compute-0 sudo[153733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:23 compute-0 sshd-session[153628]: Invalid user a from 95.251.232.242 port 46304
Oct 01 16:44:23 compute-0 sshd-session[153628]: Connection closed by invalid user a 95.251.232.242 port 46304 [preauth]
Oct 01 16:44:23 compute-0 python3.9[153735]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:44:23 compute-0 sudo[153733]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:44:23 compute-0 sudo[153885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxzxgetsnxomovnsrzyvftuwvukqeilb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337063.4886-34-203047173469383/AnsiballZ_file.py'
Oct 01 16:44:23 compute-0 sudo[153885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:24 compute-0 python3.9[153887]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:44:24 compute-0 sudo[153885]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:24 compute-0 sudo[154037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buljzlahinclisrmerlkshynpecovink ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337064.1794028-34-51775343645694/AnsiballZ_file.py'
Oct 01 16:44:24 compute-0 sudo[154037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:24 compute-0 python3.9[154039]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:44:24 compute-0 sudo[154037]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:24 compute-0 ceph-mon[74273]: pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:25 compute-0 python3.9[154189]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:44:26 compute-0 sudo[154339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbvtiecvitocqabxbjlgmhlgfatzmyrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337065.7969093-78-152139605115811/AnsiballZ_seboolean.py'
Oct 01 16:44:26 compute-0 sudo[154339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:26 compute-0 python3.9[154341]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 01 16:44:26 compute-0 ceph-mon[74273]: pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:27 compute-0 sudo[154339]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:28 compute-0 python3.9[154491]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:44:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:44:28 compute-0 python3.9[154613]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759337067.423154-86-90566506758785/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:44:28 compute-0 ceph-mon[74273]: pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:29 compute-0 python3.9[154763]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:44:30 compute-0 python3.9[154884]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759337069.1188333-101-202099121799843/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:44:30 compute-0 ceph-mon[74273]: pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:30 compute-0 sudo[155034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlxjnawhojkzzsrmpajeoyuiaprgvdtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337070.6229227-118-77899906295737/AnsiballZ_setup.py'
Oct 01 16:44:30 compute-0 sudo[155034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:31 compute-0 python3.9[155036]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 16:44:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:31 compute-0 sudo[155034]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:31 compute-0 sudo[155118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvxvytzigcgfgsaygeqmhrorsofenpxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337070.6229227-118-77899906295737/AnsiballZ_dnf.py'
Oct 01 16:44:31 compute-0 sudo[155118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:32 compute-0 python3.9[155120]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:44:32 compute-0 ceph-mon[74273]: pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:33 compute-0 sudo[155118]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:44:33 compute-0 ceph-mon[74273]: pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:34 compute-0 sudo[155271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pijmabfbczhkeswdsuerkcbrimfgnorh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337073.5454135-130-13242233208638/AnsiballZ_systemd.py'
Oct 01 16:44:34 compute-0 sudo[155271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:34 compute-0 python3.9[155273]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 01 16:44:34 compute-0 sudo[155271]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:35 compute-0 python3.9[155426]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:44:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:36 compute-0 python3.9[155547]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759337074.869776-138-244063454353375/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:44:36 compute-0 ceph-mon[74273]: pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:36 compute-0 python3.9[155697]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:44:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:37 compute-0 python3.9[155818]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759337076.2639275-138-99486841600706/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:44:38 compute-0 ceph-mon[74273]: pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:44:38 compute-0 python3.9[155968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:44:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:39 compute-0 python3.9[156089]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759337078.2995164-182-237714749022929/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:44:40 compute-0 python3.9[156239]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:44:40 compute-0 ceph-mon[74273]: pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:40 compute-0 ovn_controller[152430]: 2025-10-01T16:44:40Z|00025|memory|INFO|16256 kB peak resident set size after 30.0 seconds
Oct 01 16:44:40 compute-0 ovn_controller[152430]: 2025-10-01T16:44:40Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Oct 01 16:44:40 compute-0 podman[156334]: 2025-10-01 16:44:40.674062603 +0000 UTC m=+0.133361892 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 16:44:40 compute-0 python3.9[156373]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759337079.6696177-182-171752265010424/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:44:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:44:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:44:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:44:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:44:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:44:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:44:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:41 compute-0 python3.9[156537]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:44:42 compute-0 sudo[156689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfwsqlxksbrsmhxxqbuktjvhrktuqelg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337081.800603-220-133613270821847/AnsiballZ_file.py'
Oct 01 16:44:42 compute-0 sudo[156689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:42 compute-0 python3.9[156691]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:44:42 compute-0 sudo[156689]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:42 compute-0 ceph-mon[74273]: pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:42 compute-0 sudo[156841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcqkzfzssjuzaptrhszphgulqyyvljcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337082.5076506-228-61408212515976/AnsiballZ_stat.py'
Oct 01 16:44:42 compute-0 sudo[156841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:43 compute-0 python3.9[156843]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:44:43 compute-0 sudo[156841]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:43 compute-0 sudo[156919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhekhmjwbilvxldtyuuqdwvkumvbuvph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337082.5076506-228-61408212515976/AnsiballZ_file.py'
Oct 01 16:44:43 compute-0 sudo[156919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:43 compute-0 python3.9[156921]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:44:43 compute-0 sudo[156919]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:44:44 compute-0 sudo[157071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwuajirnrbhwtsgudpykvkorjtlkyeoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337083.850133-228-161803916550618/AnsiballZ_stat.py'
Oct 01 16:44:44 compute-0 sudo[157071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:44 compute-0 python3.9[157073]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:44:44 compute-0 sudo[157071]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:44 compute-0 ceph-mon[74273]: pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:44 compute-0 sudo[157149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prpeitarmdqnvwtpjdxzuktupkwpqbnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337083.850133-228-161803916550618/AnsiballZ_file.py'
Oct 01 16:44:44 compute-0 sudo[157149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:44 compute-0 python3.9[157151]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:44:44 compute-0 sudo[157149]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:45 compute-0 sudo[157301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stezeuafyymgouvdcaixgbolljzscutt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337084.9572392-251-123798294914560/AnsiballZ_file.py'
Oct 01 16:44:45 compute-0 sudo[157301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:45 compute-0 python3.9[157303]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:44:45 compute-0 sudo[157301]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:46 compute-0 sudo[157453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvtrzhmgpktldtjvcxzdnyebwvjpedfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337085.766457-259-159054260843080/AnsiballZ_stat.py'
Oct 01 16:44:46 compute-0 sudo[157453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:46 compute-0 python3.9[157455]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:44:46 compute-0 sudo[157453]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:46 compute-0 ceph-mon[74273]: pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:46 compute-0 sudo[157531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-segrbjhcyzqopsscazqpodxfjhpxnznw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337085.766457-259-159054260843080/AnsiballZ_file.py'
Oct 01 16:44:46 compute-0 sudo[157531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:46 compute-0 python3.9[157533]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:44:46 compute-0 sudo[157531]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:47 compute-0 sudo[157683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiuhunovokfdyybdnxrldirtqquanlwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337087.1001663-271-232096303007241/AnsiballZ_stat.py'
Oct 01 16:44:47 compute-0 sudo[157683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:47 compute-0 python3.9[157685]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:44:47 compute-0 sudo[157683]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:47 compute-0 sudo[157761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ousrhdtgyobuwcesgmpurokdsnxcocwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337087.1001663-271-232096303007241/AnsiballZ_file.py'
Oct 01 16:44:47 compute-0 sudo[157761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:48 compute-0 python3.9[157763]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:44:48 compute-0 sudo[157761]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:48 compute-0 ceph-mon[74273]: pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:48 compute-0 sudo[157913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prrpdzmkzdsydoyqsklhsjsyhieysupw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337088.325182-283-166848221424269/AnsiballZ_systemd.py'
Oct 01 16:44:48 compute-0 sudo[157913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:44:49 compute-0 python3.9[157915]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:44:49 compute-0 systemd[1]: Reloading.
Oct 01 16:44:49 compute-0 systemd-sysv-generator[157943]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:44:49 compute-0 systemd-rc-local-generator[157938]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:44:49 compute-0 sudo[157913]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:49 compute-0 sudo[158101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knfmkicjpzbzjrokhyqmmicorizfcmrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337089.4736056-291-241076283572987/AnsiballZ_stat.py'
Oct 01 16:44:49 compute-0 sudo[158101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:50 compute-0 python3.9[158103]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:44:50 compute-0 sudo[158101]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:50 compute-0 sudo[158179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toqlvkguinazbrhwdzpmycbrqesmdigb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337089.4736056-291-241076283572987/AnsiballZ_file.py'
Oct 01 16:44:50 compute-0 sudo[158179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:50 compute-0 python3.9[158181]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:44:50 compute-0 ceph-mon[74273]: pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:50 compute-0 sudo[158179]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:51 compute-0 sudo[158331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sffiecunooykgfnkbmygokxlajbxdxrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337090.8562045-303-194364706231729/AnsiballZ_stat.py'
Oct 01 16:44:51 compute-0 sudo[158331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:51 compute-0 python3.9[158333]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:44:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:51 compute-0 sudo[158331]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:51 compute-0 sudo[158409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzeweqgywkizvwuqpfelfvnsuelbbpvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337090.8562045-303-194364706231729/AnsiballZ_file.py'
Oct 01 16:44:51 compute-0 sudo[158409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:52 compute-0 python3.9[158411]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:44:52 compute-0 sudo[158409]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:52 compute-0 ceph-mon[74273]: pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:52 compute-0 sudo[158561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzxggmvcipnyielufmuapdkqcwtlnfgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337092.2032304-315-224506637391638/AnsiballZ_systemd.py'
Oct 01 16:44:52 compute-0 sudo[158561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:52 compute-0 python3.9[158563]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:44:52 compute-0 systemd[1]: Reloading.
Oct 01 16:44:53 compute-0 systemd-sysv-generator[158596]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:44:53 compute-0 systemd-rc-local-generator[158591]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:44:53 compute-0 systemd[1]: Starting Create netns directory...
Oct 01 16:44:53 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 01 16:44:53 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 01 16:44:53 compute-0 systemd[1]: Finished Create netns directory.
Oct 01 16:44:53 compute-0 sudo[158561]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:44:53 compute-0 sudo[158754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onhiqlilzugumseudmdlqqzverthnbst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337093.6182544-325-56750153826182/AnsiballZ_file.py'
Oct 01 16:44:53 compute-0 sudo[158754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:54 compute-0 python3.9[158756]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:44:54 compute-0 sudo[158754]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:54 compute-0 sudo[158906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcgrxbrotoeexwgawfenriacdtrwhvbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337094.2798643-333-78678675645716/AnsiballZ_stat.py'
Oct 01 16:44:54 compute-0 sudo[158906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:54 compute-0 ceph-mon[74273]: pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:54 compute-0 python3.9[158908]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:44:54 compute-0 sudo[158906]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:55 compute-0 sudo[159029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgjbzkmftijoxsnimlgfbmqbnrakodhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337094.2798643-333-78678675645716/AnsiballZ_copy.py'
Oct 01 16:44:55 compute-0 sudo[159029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:55 compute-0 python3.9[159031]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759337094.2798643-333-78678675645716/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:44:55 compute-0 sudo[159029]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:56 compute-0 sudo[159181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uupnmpwxjgnrpiphriobfmubhlcyxrnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337095.744587-350-270743598211189/AnsiballZ_file.py'
Oct 01 16:44:56 compute-0 sudo[159181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:56 compute-0 python3.9[159183]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:44:56 compute-0 sudo[159181]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:56 compute-0 ceph-mon[74273]: pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:56 compute-0 sudo[159333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoqdsqnlwiykwwhhugnhmssgfeaweksk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337096.4573102-358-203078652474259/AnsiballZ_stat.py'
Oct 01 16:44:56 compute-0 sudo[159333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:57 compute-0 python3.9[159335]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:44:57 compute-0 sudo[159333]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:57 compute-0 sudo[159456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zioknrnvmezwhymdvjweylzxbdcdztkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337096.4573102-358-203078652474259/AnsiballZ_copy.py'
Oct 01 16:44:57 compute-0 sudo[159456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:57 compute-0 python3.9[159458]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759337096.4573102-358-203078652474259/.source.json _original_basename=.nv2ai1oe follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:44:57 compute-0 sudo[159456]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:58 compute-0 sudo[159608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-polahlolexhaigunfsgtnukihoqitjpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337097.9458091-373-239182750949450/AnsiballZ_file.py'
Oct 01 16:44:58 compute-0 sudo[159608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:58 compute-0 python3.9[159610]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:44:58 compute-0 sudo[159608]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:58 compute-0 ceph-mon[74273]: pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:44:59 compute-0 sudo[159760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yumncofyrdfxbdnkojywqpwvhsnbbkxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337098.7213938-381-181078653006124/AnsiballZ_stat.py'
Oct 01 16:44:59 compute-0 sudo[159760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:59 compute-0 sudo[159760]: pam_unix(sudo:session): session closed for user root
Oct 01 16:44:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:44:59 compute-0 sudo[159883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnivszmpnnanfvnizswunozsdejakdtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337098.7213938-381-181078653006124/AnsiballZ_copy.py'
Oct 01 16:44:59 compute-0 sudo[159883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:44:59 compute-0 sudo[159883]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:00 compute-0 ceph-mon[74273]: pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:00 compute-0 sudo[160035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxyxehgbfzmhfjysnllitztrfdqwmfpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337100.3338184-398-102327155921455/AnsiballZ_container_config_data.py'
Oct 01 16:45:00 compute-0 sudo[160035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:01 compute-0 python3.9[160037]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 01 16:45:01 compute-0 sudo[160035]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:01 compute-0 sudo[160187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfgeeghvnawjfhnockzwjxtjpafpjzbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337101.2499404-407-156708215375473/AnsiballZ_container_config_hash.py'
Oct 01 16:45:01 compute-0 sudo[160187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:01 compute-0 python3.9[160189]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 01 16:45:01 compute-0 sudo[160187]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:02 compute-0 ceph-mon[74273]: pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:02 compute-0 sudo[160339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjeligjcsrqnwdwhdcgqjypovkvezysa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337102.1991348-416-238021592247920/AnsiballZ_podman_container_info.py'
Oct 01 16:45:02 compute-0 sudo[160339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:02 compute-0 python3.9[160341]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 01 16:45:03 compute-0 sudo[160339]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:45:04 compute-0 sudo[160518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjhctjdavwfyfyujctxoymgbvwfjhzbw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759337103.6866255-429-180027923386876/AnsiballZ_edpm_container_manage.py'
Oct 01 16:45:04 compute-0 sudo[160518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:04 compute-0 python3[160520]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 01 16:45:04 compute-0 ceph-mon[74273]: pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:06 compute-0 ceph-mon[74273]: pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:07 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 16:45:07 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5514 writes, 23K keys, 5514 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5514 writes, 832 syncs, 6.63 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5514 writes, 23K keys, 5514 commit groups, 1.0 writes per commit group, ingest: 18.56 MB, 0.03 MB/s
                                           Interval WAL: 5514 writes, 832 syncs, 6.63 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 16:45:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:08 compute-0 ceph-mon[74273]: pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:45:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:10 compute-0 sudo[160599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:45:10 compute-0 sudo[160599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:10 compute-0 sudo[160599]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:10 compute-0 sudo[160630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:45:10 compute-0 sudo[160630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:10 compute-0 sudo[160630]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:10 compute-0 ceph-mon[74273]: pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:10 compute-0 sudo[160659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:45:10 compute-0 sudo[160659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:10 compute-0 sudo[160659]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:10 compute-0 sudo[160685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 01 16:45:10 compute-0 sudo[160685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:45:11
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.log', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', '.rgw.root', 'volumes']
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:45:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:11 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 16:45:11 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 6669 writes, 27K keys, 6669 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6669 writes, 1198 syncs, 5.57 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6669 writes, 27K keys, 6669 commit groups, 1.0 writes per commit group, ingest: 19.45 MB, 0.03 MB/s
                                           Interval WAL: 6669 writes, 1198 syncs, 5.57 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f21090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f21090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f21090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 16:45:11 compute-0 podman[160623]: 2025-10-01 16:45:11.555257416 +0000 UTC m=+0.763162979 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 01 16:45:12 compute-0 sudo[160685]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:45:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:45:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:45:12 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:45:12 compute-0 sudo[160780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:45:12 compute-0 sudo[160780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:12 compute-0 sudo[160780]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:12 compute-0 sudo[160805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:45:12 compute-0 sudo[160805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:12 compute-0 sudo[160805]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:12 compute-0 sudo[160830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:45:12 compute-0 sudo[160830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:12 compute-0 sudo[160830]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:12 compute-0 sudo[160857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:45:12 compute-0 sudo[160857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:12 compute-0 ceph-mon[74273]: pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:45:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:45:12 compute-0 podman[160533]: 2025-10-01 16:45:12.998259983 +0000 UTC m=+8.426772174 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 01 16:45:13 compute-0 podman[160920]: 2025-10-01 16:45:13.182949457 +0000 UTC m=+0.068869163 container create a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Oct 01 16:45:13 compute-0 podman[160920]: 2025-10-01 16:45:13.154724765 +0000 UTC m=+0.040644571 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 01 16:45:13 compute-0 python3[160520]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host 
--pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 01 16:45:13 compute-0 sudo[160857]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:13 compute-0 sudo[160518]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:45:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:45:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:45:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:45:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:45:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:45:13 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 574e7f5d-ae69-4468-a018-070d05f675f9 does not exist
Oct 01 16:45:13 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 863ee53a-3127-41d5-9ca8-12dfc78c7d2d does not exist
Oct 01 16:45:13 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 05582094-f217-42c2-bac9-fc5b2117ef55 does not exist
Oct 01 16:45:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:45:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:45:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:45:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:45:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:45:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:45:13 compute-0 sudo[161000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:45:13 compute-0 sudo[161000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:13 compute-0 sudo[161000]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:13 compute-0 sudo[161025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:45:13 compute-0 sudo[161025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:13 compute-0 sudo[161025]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:13 compute-0 sudo[161073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:45:13 compute-0 sudo[161073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:13 compute-0 sudo[161073]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:13 compute-0 sudo[161126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:45:13 compute-0 sudo[161126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:13 compute-0 sudo[161240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niizexvrwrbyzomyuvewsvklnhbdcfpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337113.5023117-437-238160425298026/AnsiballZ_stat.py'
Oct 01 16:45:13 compute-0 sudo[161240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:45:13 compute-0 podman[161268]: 2025-10-01 16:45:13.96277615 +0000 UTC m=+0.042721935 container create 6fd8aa5cd785b11a42259f10dac5e77744e183e13c0da96c5ba4cbe2ff5594cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hugle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 01 16:45:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:45:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:45:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:45:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:45:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:45:13 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:45:13 compute-0 systemd[1]: Started libpod-conmon-6fd8aa5cd785b11a42259f10dac5e77744e183e13c0da96c5ba4cbe2ff5594cf.scope.
Oct 01 16:45:14 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:45:14 compute-0 python3.9[161247]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:45:14 compute-0 podman[161268]: 2025-10-01 16:45:14.034393304 +0000 UTC m=+0.114339099 container init 6fd8aa5cd785b11a42259f10dac5e77744e183e13c0da96c5ba4cbe2ff5594cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:45:14 compute-0 podman[161268]: 2025-10-01 16:45:13.941260367 +0000 UTC m=+0.021206192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:45:14 compute-0 podman[161268]: 2025-10-01 16:45:14.04349516 +0000 UTC m=+0.123440935 container start 6fd8aa5cd785b11a42259f10dac5e77744e183e13c0da96c5ba4cbe2ff5594cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 16:45:14 compute-0 podman[161268]: 2025-10-01 16:45:14.047590006 +0000 UTC m=+0.127535791 container attach 6fd8aa5cd785b11a42259f10dac5e77744e183e13c0da96c5ba4cbe2ff5594cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hugle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:45:14 compute-0 vigilant_hugle[161284]: 167 167
Oct 01 16:45:14 compute-0 systemd[1]: libpod-6fd8aa5cd785b11a42259f10dac5e77744e183e13c0da96c5ba4cbe2ff5594cf.scope: Deactivated successfully.
Oct 01 16:45:14 compute-0 podman[161268]: 2025-10-01 16:45:14.061508258 +0000 UTC m=+0.141454063 container died 6fd8aa5cd785b11a42259f10dac5e77744e183e13c0da96c5ba4cbe2ff5594cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 16:45:14 compute-0 sudo[161240]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e7290d182261a00bac118f3aece6ed20ee08988447e88d87677daa0d6bc4df1-merged.mount: Deactivated successfully.
Oct 01 16:45:14 compute-0 podman[161268]: 2025-10-01 16:45:14.10619464 +0000 UTC m=+0.186140415 container remove 6fd8aa5cd785b11a42259f10dac5e77744e183e13c0da96c5ba4cbe2ff5594cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hugle, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:45:14 compute-0 systemd[1]: libpod-conmon-6fd8aa5cd785b11a42259f10dac5e77744e183e13c0da96c5ba4cbe2ff5594cf.scope: Deactivated successfully.
Oct 01 16:45:14 compute-0 podman[161335]: 2025-10-01 16:45:14.340501551 +0000 UTC m=+0.064618056 container create f5c73344d1591317639a7f7d3a0465ecbea352207c821610731be0dd23deb4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bartik, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:45:14 compute-0 systemd[1]: Started libpod-conmon-f5c73344d1591317639a7f7d3a0465ecbea352207c821610731be0dd23deb4e5.scope.
Oct 01 16:45:14 compute-0 podman[161335]: 2025-10-01 16:45:14.301864351 +0000 UTC m=+0.025980866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:45:14 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1766c4c700690f2def093daa9b6f68a4c01b38d4418708e8f00f4f58f30ca61a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1766c4c700690f2def093daa9b6f68a4c01b38d4418708e8f00f4f58f30ca61a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1766c4c700690f2def093daa9b6f68a4c01b38d4418708e8f00f4f58f30ca61a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1766c4c700690f2def093daa9b6f68a4c01b38d4418708e8f00f4f58f30ca61a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1766c4c700690f2def093daa9b6f68a4c01b38d4418708e8f00f4f58f30ca61a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:45:14 compute-0 podman[161335]: 2025-10-01 16:45:14.424989674 +0000 UTC m=+0.149106179 container init f5c73344d1591317639a7f7d3a0465ecbea352207c821610731be0dd23deb4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 16:45:14 compute-0 podman[161335]: 2025-10-01 16:45:14.444519398 +0000 UTC m=+0.168635863 container start f5c73344d1591317639a7f7d3a0465ecbea352207c821610731be0dd23deb4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:45:14 compute-0 podman[161335]: 2025-10-01 16:45:14.448648656 +0000 UTC m=+0.172765161 container attach f5c73344d1591317639a7f7d3a0465ecbea352207c821610731be0dd23deb4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 16:45:14 compute-0 sudo[161481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csqisjheehhjxzisakkfkhndxelrvhsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337114.3398218-446-67559917299685/AnsiballZ_file.py'
Oct 01 16:45:14 compute-0 sudo[161481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:14 compute-0 python3.9[161483]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:45:14 compute-0 sudo[161481]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:14 compute-0 ceph-mon[74273]: pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:15 compute-0 sudo[161557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbtwjyltijouzqpkrkahkuibedyuruok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337114.3398218-446-67559917299685/AnsiballZ_stat.py'
Oct 01 16:45:15 compute-0 sudo[161557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:15 compute-0 python3.9[161560]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:45:15 compute-0 sudo[161557]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:15 compute-0 reverent_bartik[161377]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:45:15 compute-0 reverent_bartik[161377]: --> relative data size: 1.0
Oct 01 16:45:15 compute-0 reverent_bartik[161377]: --> All data devices are unavailable
Oct 01 16:45:15 compute-0 systemd[1]: libpod-f5c73344d1591317639a7f7d3a0465ecbea352207c821610731be0dd23deb4e5.scope: Deactivated successfully.
Oct 01 16:45:15 compute-0 systemd[1]: libpod-f5c73344d1591317639a7f7d3a0465ecbea352207c821610731be0dd23deb4e5.scope: Consumed 1.029s CPU time.
Oct 01 16:45:15 compute-0 podman[161335]: 2025-10-01 16:45:15.567393918 +0000 UTC m=+1.291510383 container died f5c73344d1591317639a7f7d3a0465ecbea352207c821610731be0dd23deb4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bartik, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 16:45:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-1766c4c700690f2def093daa9b6f68a4c01b38d4418708e8f00f4f58f30ca61a-merged.mount: Deactivated successfully.
Oct 01 16:45:15 compute-0 podman[161335]: 2025-10-01 16:45:15.632234628 +0000 UTC m=+1.356351113 container remove f5c73344d1591317639a7f7d3a0465ecbea352207c821610731be0dd23deb4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bartik, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:45:15 compute-0 systemd[1]: libpod-conmon-f5c73344d1591317639a7f7d3a0465ecbea352207c821610731be0dd23deb4e5.scope: Deactivated successfully.
Oct 01 16:45:15 compute-0 sudo[161126]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:15 compute-0 sudo[161672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:45:15 compute-0 sudo[161672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:15 compute-0 sudo[161672]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:15 compute-0 sudo[161720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:45:15 compute-0 sudo[161720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:15 compute-0 sudo[161720]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:15 compute-0 sudo[161765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:45:15 compute-0 sudo[161765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:15 compute-0 sudo[161765]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:15 compute-0 sudo[161824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuyvocqsdxilwxdunmtqtyyhtwejnekx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337115.3933964-446-47391181844485/AnsiballZ_copy.py'
Oct 01 16:45:15 compute-0 sudo[161824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:15 compute-0 sudo[161818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:45:15 compute-0 sudo[161818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:16 compute-0 python3.9[161839]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759337115.3933964-446-47391181844485/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:45:16 compute-0 sudo[161824]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:16 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 16:45:16 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5418 writes, 23K keys, 5418 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5418 writes, 774 syncs, 7.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5418 writes, 23K keys, 5418 commit groups, 1.0 writes per commit group, ingest: 18.33 MB, 0.03 MB/s
                                           Interval WAL: 5418 writes, 774 syncs, 7.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.55 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.55 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 16:45:16 compute-0 podman[161910]: 2025-10-01 16:45:16.299345186 +0000 UTC m=+0.061398460 container create bc422f529e4491e4ec9644c0281f711353e741249db59483d947f4c17f238157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_poincare, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 16:45:16 compute-0 systemd[1]: Started libpod-conmon-bc422f529e4491e4ec9644c0281f711353e741249db59483d947f4c17f238157.scope.
Oct 01 16:45:16 compute-0 podman[161910]: 2025-10-01 16:45:16.265552805 +0000 UTC m=+0.027606119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:45:16 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:45:16 compute-0 podman[161910]: 2025-10-01 16:45:16.388365816 +0000 UTC m=+0.150419070 container init bc422f529e4491e4ec9644c0281f711353e741249db59483d947f4c17f238157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 01 16:45:16 compute-0 podman[161910]: 2025-10-01 16:45:16.39630508 +0000 UTC m=+0.158358314 container start bc422f529e4491e4ec9644c0281f711353e741249db59483d947f4c17f238157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 16:45:16 compute-0 podman[161910]: 2025-10-01 16:45:16.399692047 +0000 UTC m=+0.161745271 container attach bc422f529e4491e4ec9644c0281f711353e741249db59483d947f4c17f238157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_poincare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:45:16 compute-0 wizardly_poincare[161949]: 167 167
Oct 01 16:45:16 compute-0 systemd[1]: libpod-bc422f529e4491e4ec9644c0281f711353e741249db59483d947f4c17f238157.scope: Deactivated successfully.
Oct 01 16:45:16 compute-0 podman[161910]: 2025-10-01 16:45:16.405189161 +0000 UTC m=+0.167242395 container died bc422f529e4491e4ec9644c0281f711353e741249db59483d947f4c17f238157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 01 16:45:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9559f8b5addc0d86aca9038f272401ce89e85f3cb45b7bdddcecbc7bab0f06c-merged.mount: Deactivated successfully.
Oct 01 16:45:16 compute-0 sudo[161986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwnsbtlrlbjzaxkkmibomrbnubiphjnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337115.3933964-446-47391181844485/AnsiballZ_systemd.py'
Oct 01 16:45:16 compute-0 sudo[161986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:16 compute-0 podman[161910]: 2025-10-01 16:45:16.45040492 +0000 UTC m=+0.212458194 container remove bc422f529e4491e4ec9644c0281f711353e741249db59483d947f4c17f238157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_poincare, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:45:16 compute-0 systemd[1]: libpod-conmon-bc422f529e4491e4ec9644c0281f711353e741249db59483d947f4c17f238157.scope: Deactivated successfully.
Oct 01 16:45:16 compute-0 podman[162002]: 2025-10-01 16:45:16.641208471 +0000 UTC m=+0.059269602 container create a770b4de3a07b7b90c6491ac34fc24c28f5bcb5aaa066fe841aa77652443bdca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 16:45:16 compute-0 systemd[1]: Started libpod-conmon-a770b4de3a07b7b90c6491ac34fc24c28f5bcb5aaa066fe841aa77652443bdca.scope.
Oct 01 16:45:16 compute-0 podman[162002]: 2025-10-01 16:45:16.621070304 +0000 UTC m=+0.039131425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:45:16 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb697c24036881177c50473433a84ff8a08d77392cfaf43b7d49d1b53e73927/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb697c24036881177c50473433a84ff8a08d77392cfaf43b7d49d1b53e73927/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb697c24036881177c50473433a84ff8a08d77392cfaf43b7d49d1b53e73927/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb697c24036881177c50473433a84ff8a08d77392cfaf43b7d49d1b53e73927/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:45:16 compute-0 python3.9[161994]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 01 16:45:16 compute-0 podman[162002]: 2025-10-01 16:45:16.742852962 +0000 UTC m=+0.160914083 container init a770b4de3a07b7b90c6491ac34fc24c28f5bcb5aaa066fe841aa77652443bdca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:45:16 compute-0 systemd[1]: Reloading.
Oct 01 16:45:16 compute-0 podman[162002]: 2025-10-01 16:45:16.75384532 +0000 UTC m=+0.171906441 container start a770b4de3a07b7b90c6491ac34fc24c28f5bcb5aaa066fe841aa77652443bdca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct 01 16:45:16 compute-0 podman[162002]: 2025-10-01 16:45:16.757826989 +0000 UTC m=+0.175888100 container attach a770b4de3a07b7b90c6491ac34fc24c28f5bcb5aaa066fe841aa77652443bdca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:45:16 compute-0 systemd-rc-local-generator[162049]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:45:16 compute-0 systemd-sysv-generator[162054]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:45:16 compute-0 ceph-mon[74273]: pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:17 compute-0 sudo[161986]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:17 compute-0 sudo[162131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdpplriibxplbglncxozpnznzbqipchi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337115.3933964-446-47391181844485/AnsiballZ_systemd.py'
Oct 01 16:45:17 compute-0 sudo[162131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]: {
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:     "0": [
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:         {
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "devices": [
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "/dev/loop3"
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             ],
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "lv_name": "ceph_lv0",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "lv_size": "21470642176",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "name": "ceph_lv0",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "tags": {
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.cluster_name": "ceph",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.crush_device_class": "",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.encrypted": "0",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.osd_id": "0",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.type": "block",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.vdo": "0"
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             },
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "type": "block",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "vg_name": "ceph_vg0"
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:         }
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:     ],
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:     "1": [
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:         {
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "devices": [
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "/dev/loop4"
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             ],
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "lv_name": "ceph_lv1",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "lv_size": "21470642176",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "name": "ceph_lv1",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "tags": {
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.cluster_name": "ceph",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.crush_device_class": "",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.encrypted": "0",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.osd_id": "1",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.type": "block",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.vdo": "0"
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             },
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "type": "block",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "vg_name": "ceph_vg1"
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:         }
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:     ],
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:     "2": [
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:         {
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "devices": [
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "/dev/loop5"
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             ],
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "lv_name": "ceph_lv2",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "lv_size": "21470642176",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "name": "ceph_lv2",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "tags": {
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.cluster_name": "ceph",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.crush_device_class": "",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.encrypted": "0",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.osd_id": "2",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.type": "block",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:                 "ceph.vdo": "0"
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             },
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "type": "block",
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:             "vg_name": "ceph_vg2"
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:         }
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]:     ]
Oct 01 16:45:17 compute-0 wizardly_shtern[162019]: }
Oct 01 16:45:17 compute-0 systemd[1]: libpod-a770b4de3a07b7b90c6491ac34fc24c28f5bcb5aaa066fe841aa77652443bdca.scope: Deactivated successfully.
Oct 01 16:45:17 compute-0 podman[162002]: 2025-10-01 16:45:17.612003167 +0000 UTC m=+1.030064288 container died a770b4de3a07b7b90c6491ac34fc24c28f5bcb5aaa066fe841aa77652443bdca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Oct 01 16:45:17 compute-0 python3.9[162133]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:45:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bb697c24036881177c50473433a84ff8a08d77392cfaf43b7d49d1b53e73927-merged.mount: Deactivated successfully.
Oct 01 16:45:17 compute-0 podman[162002]: 2025-10-01 16:45:17.674425848 +0000 UTC m=+1.092486949 container remove a770b4de3a07b7b90c6491ac34fc24c28f5bcb5aaa066fe841aa77652443bdca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 16:45:17 compute-0 systemd[1]: libpod-conmon-a770b4de3a07b7b90c6491ac34fc24c28f5bcb5aaa066fe841aa77652443bdca.scope: Deactivated successfully.
Oct 01 16:45:17 compute-0 systemd[1]: Reloading.
Oct 01 16:45:17 compute-0 sudo[161818]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:17 compute-0 systemd-sysv-generator[162197]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:45:17 compute-0 systemd-rc-local-generator[162194]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:45:17 compute-0 sudo[162152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:45:17 compute-0 sudo[162152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:17 compute-0 sudo[162152]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:17 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Oct 01 16:45:18 compute-0 sudo[162214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:45:18 compute-0 sudo[162214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:18 compute-0 sudo[162214]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:18 compute-0 ceph-mon[74273]: pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:18 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:45:18 compute-0 sudo[162250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd36410cd25af1060bce45049d581045bc65e50d1e26581199bb8cf2d27f8c2/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 01 16:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd36410cd25af1060bce45049d581045bc65e50d1e26581199bb8cf2d27f8c2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 01 16:45:18 compute-0 sudo[162250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:18 compute-0 sudo[162250]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:18 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6.
Oct 01 16:45:18 compute-0 podman[162224]: 2025-10-01 16:45:18.097399699 +0000 UTC m=+0.109960493 container init a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, tcib_managed=true)
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: + sudo -E kolla_set_configs
Oct 01 16:45:18 compute-0 podman[162224]: 2025-10-01 16:45:18.120061523 +0000 UTC m=+0.132622297 container start a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 01 16:45:18 compute-0 sudo[162281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:45:18 compute-0 edpm-start-podman-container[162224]: ovn_metadata_agent
Oct 01 16:45:18 compute-0 sudo[162281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: INFO:__main__:Validating config file
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: INFO:__main__:Copying service configuration files
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: INFO:__main__:Writing out command to execute
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: ++ cat /run_command
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: + CMD=neutron-ovn-metadata-agent
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: + ARGS=
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: + sudo kolla_copy_cacerts
Oct 01 16:45:18 compute-0 podman[162308]: 2025-10-01 16:45:18.203570137 +0000 UTC m=+0.073955258 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:45:18 compute-0 edpm-start-podman-container[162213]: Creating additional drop-in dependency for "ovn_metadata_agent" (a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6)
Oct 01 16:45:18 compute-0 systemd[1]: Reloading.
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: + [[ ! -n '' ]]
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: + . kolla_extend_start
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: Running command: 'neutron-ovn-metadata-agent'
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: + umask 0022
Oct 01 16:45:18 compute-0 ovn_metadata_agent[162258]: + exec neutron-ovn-metadata-agent
Oct 01 16:45:18 compute-0 systemd-sysv-generator[162392]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:45:18 compute-0 systemd-rc-local-generator[162388]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:45:18 compute-0 ceph-mgr[74571]: [devicehealth INFO root] Check health
Oct 01 16:45:18 compute-0 systemd[1]: Started ovn_metadata_agent container.
Oct 01 16:45:18 compute-0 sudo[162131]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:18 compute-0 podman[162424]: 2025-10-01 16:45:18.497040541 +0000 UTC m=+0.036223382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:45:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:45:18 compute-0 podman[162424]: 2025-10-01 16:45:18.8392756 +0000 UTC m=+0.378458411 container create 8f3003d05ebd69b7935080f2f9d131b26dab19277b0e75ce2022a7b9665fcf27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:45:18 compute-0 systemd[1]: Started libpod-conmon-8f3003d05ebd69b7935080f2f9d131b26dab19277b0e75ce2022a7b9665fcf27.scope.
Oct 01 16:45:18 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:45:18 compute-0 podman[162424]: 2025-10-01 16:45:18.923320682 +0000 UTC m=+0.462503493 container init 8f3003d05ebd69b7935080f2f9d131b26dab19277b0e75ce2022a7b9665fcf27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_aryabhata, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 16:45:18 compute-0 podman[162424]: 2025-10-01 16:45:18.930599161 +0000 UTC m=+0.469782012 container start 8f3003d05ebd69b7935080f2f9d131b26dab19277b0e75ce2022a7b9665fcf27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_aryabhata, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 16:45:18 compute-0 podman[162424]: 2025-10-01 16:45:18.934432599 +0000 UTC m=+0.473615410 container attach 8f3003d05ebd69b7935080f2f9d131b26dab19277b0e75ce2022a7b9665fcf27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_aryabhata, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 16:45:18 compute-0 competent_aryabhata[162464]: 167 167
Oct 01 16:45:18 compute-0 systemd[1]: libpod-8f3003d05ebd69b7935080f2f9d131b26dab19277b0e75ce2022a7b9665fcf27.scope: Deactivated successfully.
Oct 01 16:45:18 compute-0 conmon[162464]: conmon 8f3003d05ebd69b79350 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8f3003d05ebd69b7935080f2f9d131b26dab19277b0e75ce2022a7b9665fcf27.scope/container/memory.events
Oct 01 16:45:18 compute-0 podman[162424]: 2025-10-01 16:45:18.939377474 +0000 UTC m=+0.478560325 container died 8f3003d05ebd69b7935080f2f9d131b26dab19277b0e75ce2022a7b9665fcf27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_aryabhata, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:45:18 compute-0 sshd-session[153119]: Connection closed by 192.168.122.30 port 49148
Oct 01 16:45:18 compute-0 sshd-session[153116]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:45:18 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Oct 01 16:45:18 compute-0 systemd[1]: session-48.scope: Consumed 58.225s CPU time.
Oct 01 16:45:18 compute-0 systemd-logind[788]: Session 48 logged out. Waiting for processes to exit.
Oct 01 16:45:18 compute-0 systemd-logind[788]: Removed session 48.
Oct 01 16:45:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-723698ff02b2e544a57027df9d215b7fa0802a36e59ecabcb387a6f08fa2254f-merged.mount: Deactivated successfully.
Oct 01 16:45:18 compute-0 podman[162424]: 2025-10-01 16:45:18.992215666 +0000 UTC m=+0.531398477 container remove 8f3003d05ebd69b7935080f2f9d131b26dab19277b0e75ce2022a7b9665fcf27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 16:45:19 compute-0 systemd[1]: libpod-conmon-8f3003d05ebd69b7935080f2f9d131b26dab19277b0e75ce2022a7b9665fcf27.scope: Deactivated successfully.
Oct 01 16:45:19 compute-0 podman[162489]: 2025-10-01 16:45:19.201836561 +0000 UTC m=+0.049283674 container create 77a163f0f40eb312b88c59a8c4c28645559104fab1362543b363bda2da70103a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_feistel, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 16:45:19 compute-0 systemd[1]: Started libpod-conmon-77a163f0f40eb312b88c59a8c4c28645559104fab1362543b363bda2da70103a.scope.
Oct 01 16:45:19 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8f5b0aac1fb587f74964c10b71ece7e82bf238a9a0d836acc8179274e19f48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8f5b0aac1fb587f74964c10b71ece7e82bf238a9a0d836acc8179274e19f48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8f5b0aac1fb587f74964c10b71ece7e82bf238a9a0d836acc8179274e19f48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8f5b0aac1fb587f74964c10b71ece7e82bf238a9a0d836acc8179274e19f48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:45:19 compute-0 podman[162489]: 2025-10-01 16:45:19.178234332 +0000 UTC m=+0.025681465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:45:19 compute-0 podman[162489]: 2025-10-01 16:45:19.291031463 +0000 UTC m=+0.138478556 container init 77a163f0f40eb312b88c59a8c4c28645559104fab1362543b363bda2da70103a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_feistel, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 16:45:19 compute-0 podman[162489]: 2025-10-01 16:45:19.303643234 +0000 UTC m=+0.151090317 container start 77a163f0f40eb312b88c59a8c4c28645559104fab1362543b363bda2da70103a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_feistel, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 16:45:19 compute-0 podman[162489]: 2025-10-01 16:45:19.30759138 +0000 UTC m=+0.155038463 container attach 77a163f0f40eb312b88c59a8c4c28645559104fab1362543b363bda2da70103a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 01 16:45:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.908 162304 INFO neutron.common.config [-] Logging enabled!
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.908 162304 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.909 162304 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.909 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.909 162304 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.909 162304 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.909 162304 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.909 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.909 162304 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.910 162304 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.910 162304 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.910 162304 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.910 162304 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.910 162304 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.910 162304 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.910 162304 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.910 162304 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.910 162304 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.910 162304 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.911 162304 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.911 162304 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.911 162304 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.911 162304 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.911 162304 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.911 162304 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.911 162304 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.911 162304 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.911 162304 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.912 162304 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.912 162304 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.912 162304 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.912 162304 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.912 162304 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.912 162304 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.912 162304 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.912 162304 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.912 162304 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.913 162304 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.913 162304 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.913 162304 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.913 162304 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.913 162304 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.913 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.913 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.913 162304 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.913 162304 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.914 162304 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.914 162304 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.914 162304 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.914 162304 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.914 162304 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.914 162304 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.914 162304 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.914 162304 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.914 162304 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.915 162304 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.915 162304 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.915 162304 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.915 162304 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.915 162304 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.915 162304 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.915 162304 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.915 162304 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.915 162304 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.916 162304 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.916 162304 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.916 162304 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.916 162304 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.916 162304 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.916 162304 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.916 162304 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.916 162304 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.916 162304 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.916 162304 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.917 162304 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.917 162304 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.917 162304 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.917 162304 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.917 162304 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.917 162304 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.917 162304 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.917 162304 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.917 162304 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.917 162304 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.918 162304 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.918 162304 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.918 162304 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.918 162304 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.918 162304 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.918 162304 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.918 162304 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.918 162304 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.918 162304 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.919 162304 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.919 162304 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.919 162304 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.919 162304 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.919 162304 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.919 162304 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.919 162304 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.919 162304 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.919 162304 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.919 162304 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.920 162304 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.920 162304 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.920 162304 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.920 162304 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.920 162304 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.920 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.920 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.920 162304 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.920 162304 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.920 162304 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.921 162304 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.921 162304 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.921 162304 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.921 162304 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.921 162304 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.921 162304 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.921 162304 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.921 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.921 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.922 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.922 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.922 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.922 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.922 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.922 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.922 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.923 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.923 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.923 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.923 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.923 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.923 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.923 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.923 162304 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.923 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.924 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.924 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.924 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.924 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.924 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.924 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.924 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.924 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.924 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.924 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.925 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.925 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.925 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.925 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.925 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.925 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.925 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.925 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.925 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.925 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.926 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.926 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.926 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.926 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.926 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.926 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.926 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.926 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.926 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.926 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.927 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.927 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.927 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.927 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.927 162304 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.927 162304 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.927 162304 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.927 162304 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.928 162304 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.928 162304 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.928 162304 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.928 162304 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.928 162304 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.928 162304 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.928 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.928 162304 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.928 162304 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.929 162304 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.929 162304 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.929 162304 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.929 162304 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.929 162304 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.929 162304 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.929 162304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.929 162304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.929 162304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.929 162304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.930 162304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.930 162304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.930 162304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.930 162304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.930 162304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.930 162304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.930 162304 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.930 162304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.930 162304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.931 162304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.931 162304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.931 162304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.931 162304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.931 162304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.931 162304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.931 162304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.931 162304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.931 162304 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.932 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.932 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.932 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.932 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.932 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.932 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.932 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.932 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.932 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.933 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.933 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.933 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.933 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.933 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.933 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.933 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.933 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.933 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.933 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.934 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.934 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.934 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.934 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.934 162304 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.934 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.934 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.934 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.934 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.935 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.935 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.935 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.935 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.935 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.935 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.935 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.935 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.935 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.936 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.936 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.936 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.936 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.936 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.936 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.936 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.936 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.936 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.936 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.937 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.937 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.937 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.937 162304 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.937 162304 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.937 162304 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.937 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.937 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.937 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.938 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.938 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.938 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.938 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.938 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.938 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.938 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.938 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.938 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.938 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.939 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.939 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.939 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.939 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.939 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.939 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.939 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.939 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.939 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.940 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.940 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.940 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.940 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.940 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.940 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.940 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.940 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.940 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.940 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.941 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.941 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.941 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.941 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.941 162304 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.941 162304 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.952 162304 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.952 162304 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.952 162304 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.953 162304 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.953 162304 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.965 162304 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name d2971fc2-5b75-459a-98a0-6e626d0d4d99 (UUID: d2971fc2-5b75-459a-98a0-6e626d0d4d99) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.987 162304 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.988 162304 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.988 162304 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.988 162304 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.992 162304 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 01 16:45:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:19.999 162304 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.005 162304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'd2971fc2-5b75-459a-98a0-6e626d0d4d99'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fe508e69820>], external_ids={}, name=d2971fc2-5b75-459a-98a0-6e626d0d4d99, nb_cfg_timestamp=1759337058629, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.006 162304 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fe508e69310>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.007 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.007 162304 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.007 162304 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.007 162304 INFO oslo_service.service [-] Starting 1 workers
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.011 162304 DEBUG oslo_service.service [-] Started child 162516 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.014 162304 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpry047iba/privsep.sock']
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.017 162516 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-2001191'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.056 162516 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.057 162516 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.057 162516 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.063 162516 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.072 162516 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.082 162516 INFO eventlet.wsgi.server [-] (162516) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Oct 01 16:45:20 compute-0 practical_feistel[162506]: {
Oct 01 16:45:20 compute-0 practical_feistel[162506]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:45:20 compute-0 practical_feistel[162506]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:45:20 compute-0 practical_feistel[162506]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:45:20 compute-0 practical_feistel[162506]:         "osd_id": 2,
Oct 01 16:45:20 compute-0 practical_feistel[162506]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:45:20 compute-0 practical_feistel[162506]:         "type": "bluestore"
Oct 01 16:45:20 compute-0 practical_feistel[162506]:     },
Oct 01 16:45:20 compute-0 practical_feistel[162506]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:45:20 compute-0 practical_feistel[162506]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:45:20 compute-0 practical_feistel[162506]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:45:20 compute-0 practical_feistel[162506]:         "osd_id": 0,
Oct 01 16:45:20 compute-0 practical_feistel[162506]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:45:20 compute-0 practical_feistel[162506]:         "type": "bluestore"
Oct 01 16:45:20 compute-0 practical_feistel[162506]:     },
Oct 01 16:45:20 compute-0 practical_feistel[162506]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:45:20 compute-0 practical_feistel[162506]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:45:20 compute-0 practical_feistel[162506]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:45:20 compute-0 practical_feistel[162506]:         "osd_id": 1,
Oct 01 16:45:20 compute-0 practical_feistel[162506]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:45:20 compute-0 practical_feistel[162506]:         "type": "bluestore"
Oct 01 16:45:20 compute-0 practical_feistel[162506]:     }
Oct 01 16:45:20 compute-0 practical_feistel[162506]: }
Oct 01 16:45:20 compute-0 systemd[1]: libpod-77a163f0f40eb312b88c59a8c4c28645559104fab1362543b363bda2da70103a.scope: Deactivated successfully.
Oct 01 16:45:20 compute-0 podman[162489]: 2025-10-01 16:45:20.311595316 +0000 UTC m=+1.159042409 container died 77a163f0f40eb312b88c59a8c4c28645559104fab1362543b363bda2da70103a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_feistel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:45:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c8f5b0aac1fb587f74964c10b71ece7e82bf238a9a0d836acc8179274e19f48-merged.mount: Deactivated successfully.
Oct 01 16:45:20 compute-0 podman[162489]: 2025-10-01 16:45:20.385212499 +0000 UTC m=+1.232659592 container remove 77a163f0f40eb312b88c59a8c4c28645559104fab1362543b363bda2da70103a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_feistel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:45:20 compute-0 systemd[1]: libpod-conmon-77a163f0f40eb312b88c59a8c4c28645559104fab1362543b363bda2da70103a.scope: Deactivated successfully.
Oct 01 16:45:20 compute-0 sudo[162281]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:45:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:45:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:45:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev ac16c8b8-b460-407d-99ea-2ecde5221bda does not exist
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 1936b3fe-2262-4373-9ebb-8eb33317f4e0 does not exist
Oct 01 16:45:20 compute-0 sudo[162558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:45:20 compute-0 sudo[162558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:20 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 01 16:45:20 compute-0 sudo[162558]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:20 compute-0 ceph-mon[74273]: pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:45:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:45:20 compute-0 sudo[162584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:45:20 compute-0 sudo[162584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:45:20 compute-0 sudo[162584]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.659 162304 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.660 162304 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpry047iba/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.528 162582 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.537 162582 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.547 162582 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.547 162582 INFO oslo.privsep.daemon [-] privsep daemon running as pid 162582
Oct 01 16:45:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:20.664 162582 DEBUG oslo.privsep.daemon [-] privsep: reply[e4020bcc-12de-47cf-a80b-d7a0ffe7d489]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:45:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.158 162582 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.159 162582 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.159 162582 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:45:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.672 162582 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d056d8-4e08-42da-8da9-63a242c074d8]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.676 162304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=d2971fc2-5b75-459a-98a0-6e626d0d4d99, column=external_ids, values=({'neutron:ovn-metadata-id': '806c5d48-d099-5c67-9895-f7bf536a31db'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.704 162304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2971fc2-5b75-459a-98a0-6e626d0d4d99, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.754 162304 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.755 162304 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.755 162304 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.755 162304 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.755 162304 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.755 162304 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.756 162304 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.756 162304 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.756 162304 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.757 162304 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.757 162304 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.757 162304 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.757 162304 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.758 162304 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.758 162304 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.758 162304 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.759 162304 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.759 162304 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.759 162304 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.759 162304 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.759 162304 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.760 162304 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.760 162304 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.760 162304 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.760 162304 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.761 162304 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.761 162304 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.761 162304 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.762 162304 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.762 162304 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.762 162304 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.762 162304 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.762 162304 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.763 162304 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.763 162304 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.764 162304 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.764 162304 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.764 162304 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.764 162304 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.765 162304 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.765 162304 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.765 162304 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.765 162304 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.766 162304 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.766 162304 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.766 162304 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.766 162304 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.767 162304 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.767 162304 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.767 162304 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.767 162304 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.768 162304 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.768 162304 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.768 162304 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.768 162304 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.769 162304 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.769 162304 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.769 162304 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.769 162304 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.769 162304 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.770 162304 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.770 162304 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.770 162304 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.770 162304 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.770 162304 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.771 162304 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.771 162304 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.771 162304 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.771 162304 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.772 162304 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.772 162304 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.772 162304 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.772 162304 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.773 162304 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.773 162304 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.773 162304 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.773 162304 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.773 162304 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.774 162304 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.774 162304 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.774 162304 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.774 162304 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.775 162304 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.775 162304 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.775 162304 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.775 162304 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.775 162304 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.776 162304 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.776 162304 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.776 162304 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.776 162304 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.777 162304 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.777 162304 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.777 162304 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.777 162304 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.777 162304 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.778 162304 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.778 162304 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.778 162304 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.778 162304 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.779 162304 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.779 162304 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.779 162304 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.779 162304 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.779 162304 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.780 162304 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.780 162304 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.780 162304 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.780 162304 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.781 162304 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.781 162304 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.781 162304 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.781 162304 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.781 162304 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.782 162304 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.782 162304 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.782 162304 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.782 162304 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.783 162304 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.783 162304 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.783 162304 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.783 162304 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.784 162304 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.784 162304 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.784 162304 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.784 162304 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.784 162304 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.785 162304 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.785 162304 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.785 162304 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.785 162304 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.786 162304 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.786 162304 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.786 162304 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.786 162304 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.787 162304 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.787 162304 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.787 162304 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.787 162304 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.788 162304 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.788 162304 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.788 162304 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.788 162304 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.789 162304 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.789 162304 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.789 162304 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.790 162304 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.790 162304 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.790 162304 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.791 162304 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.791 162304 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.791 162304 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.792 162304 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.792 162304 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.792 162304 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.793 162304 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.793 162304 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.793 162304 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.793 162304 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.794 162304 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.794 162304 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.794 162304 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.795 162304 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.795 162304 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.795 162304 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.796 162304 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.796 162304 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.796 162304 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.797 162304 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.797 162304 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.798 162304 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.798 162304 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.798 162304 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.799 162304 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.799 162304 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.800 162304 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.800 162304 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.800 162304 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.801 162304 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.801 162304 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.801 162304 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.802 162304 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.802 162304 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.802 162304 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.803 162304 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.803 162304 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.804 162304 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.804 162304 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.804 162304 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.804 162304 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.804 162304 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.805 162304 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.805 162304 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.805 162304 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.805 162304 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.806 162304 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.806 162304 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.806 162304 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.806 162304 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.806 162304 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.807 162304 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.807 162304 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.807 162304 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.807 162304 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.808 162304 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.808 162304 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.808 162304 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.808 162304 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.808 162304 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.809 162304 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.809 162304 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.809 162304 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.809 162304 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.809 162304 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.810 162304 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.810 162304 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.810 162304 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.810 162304 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.810 162304 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.811 162304 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.811 162304 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.811 162304 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.811 162304 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.812 162304 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.812 162304 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.812 162304 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.812 162304 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.812 162304 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.813 162304 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.813 162304 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.813 162304 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.813 162304 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.813 162304 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.814 162304 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.814 162304 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.814 162304 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.814 162304 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.815 162304 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.815 162304 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.815 162304 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.815 162304 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.815 162304 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.816 162304 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.816 162304 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.816 162304 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.816 162304 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.817 162304 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.817 162304 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.817 162304 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.817 162304 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.817 162304 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.818 162304 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.818 162304 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.818 162304 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.818 162304 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.819 162304 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.819 162304 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.819 162304 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.819 162304 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.819 162304 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.820 162304 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.820 162304 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.820 162304 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.820 162304 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.820 162304 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.821 162304 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.821 162304 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.821 162304 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.821 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.822 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.822 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.822 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.822 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.823 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.823 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.823 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.823 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.823 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.824 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.824 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.824 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.824 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.825 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.825 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.825 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.825 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.825 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.826 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.826 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.826 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.826 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.827 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.827 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.827 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.827 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.828 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.828 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.828 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.828 162304 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.828 162304 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.829 162304 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.829 162304 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.829 162304 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:45:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:45:21.829 162304 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 01 16:45:22 compute-0 ceph-mon[74273]: pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:45:24 compute-0 ceph-mon[74273]: pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:26 compute-0 sshd-session[162614]: Accepted publickey for zuul from 192.168.122.30 port 36624 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:45:26 compute-0 systemd-logind[788]: New session 49 of user zuul.
Oct 01 16:45:26 compute-0 systemd[1]: Started Session 49 of User zuul.
Oct 01 16:45:26 compute-0 sshd-session[162614]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:45:26 compute-0 ceph-mon[74273]: pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:27 compute-0 python3.9[162767]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:45:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:28 compute-0 sudo[162921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rthsrxgkzwnlayouejwdrvgkjmtawtyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337127.8455956-34-245811799466093/AnsiballZ_command.py'
Oct 01 16:45:28 compute-0 sudo[162921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:28 compute-0 python3.9[162923]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:45:28 compute-0 ceph-mon[74273]: pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:28 compute-0 sudo[162921]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:45:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:29 compute-0 sudo[163086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sixgvbfxqdmuxncemsvcooagpsaygkfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337128.9918885-45-1052575419008/AnsiballZ_systemd_service.py'
Oct 01 16:45:29 compute-0 sudo[163086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:30 compute-0 python3.9[163088]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 01 16:45:30 compute-0 systemd[1]: Reloading.
Oct 01 16:45:30 compute-0 systemd-rc-local-generator[163114]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:45:30 compute-0 systemd-sysv-generator[163119]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:45:30 compute-0 sudo[163086]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:30 compute-0 ceph-mon[74273]: pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:31 compute-0 python3.9[163273]: ansible-ansible.builtin.service_facts Invoked
Oct 01 16:45:31 compute-0 network[163290]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 01 16:45:31 compute-0 network[163291]: 'network-scripts' will be removed from distribution in near future.
Oct 01 16:45:31 compute-0 network[163292]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 01 16:45:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:32 compute-0 ceph-mon[74273]: pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:45:34 compute-0 ceph-mon[74273]: pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:35 compute-0 sudo[163555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlnadxtbzgflcbeqqkrhqoxebmxtjtns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337135.2452075-64-130594979241905/AnsiballZ_systemd_service.py'
Oct 01 16:45:35 compute-0 sudo[163555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:35 compute-0 python3.9[163557]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:45:35 compute-0 sudo[163555]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:36 compute-0 sudo[163708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngvwpsdhxmhmnlpcxpsovbmnyoymffdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337136.0809872-64-221454994677266/AnsiballZ_systemd_service.py'
Oct 01 16:45:36 compute-0 sudo[163708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:36 compute-0 ceph-mon[74273]: pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:36 compute-0 python3.9[163710]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:45:36 compute-0 sudo[163708]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:37 compute-0 sudo[163861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwkyfqqffjmqeubdvckwoyjvwhadjqjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337136.9457872-64-28895001206356/AnsiballZ_systemd_service.py'
Oct 01 16:45:37 compute-0 sudo[163861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:37 compute-0 python3.9[163863]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:45:37 compute-0 sudo[163861]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:38 compute-0 sudo[164014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wopvddtrckmixfjgmqalzpdqobyrqvgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337137.8934546-64-54983035843054/AnsiballZ_systemd_service.py'
Oct 01 16:45:38 compute-0 sudo[164014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:38 compute-0 python3.9[164016]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:45:38 compute-0 sudo[164014]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:38 compute-0 ceph-mon[74273]: pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:45:39 compute-0 sudo[164167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmwshsjbycpgninrlcnwiorpolazfpmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337138.7796934-64-146246381776041/AnsiballZ_systemd_service.py'
Oct 01 16:45:39 compute-0 sudo[164167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:39 compute-0 python3.9[164169]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:45:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:39 compute-0 sudo[164167]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:40 compute-0 sudo[164320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxqsscojbfhbfogdhyyvuvuuptjdsgmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337139.6773129-64-217358997538893/AnsiballZ_systemd_service.py'
Oct 01 16:45:40 compute-0 sudo[164320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:40 compute-0 python3.9[164322]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:45:40 compute-0 sudo[164320]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:40 compute-0 ceph-mon[74273]: pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:40 compute-0 sudo[164473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adolldjexkqoojjpkkyzsovanvhupfxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337140.5498059-64-128504917722280/AnsiballZ_systemd_service.py'
Oct 01 16:45:40 compute-0 sudo[164473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:41 compute-0 python3.9[164475]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:45:41 compute-0 sudo[164473]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:45:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:45:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:45:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:45:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:45:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:45:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:42 compute-0 sudo[164626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqhokpnjfrfwthbkigviwyetbpquiolk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337141.5859866-116-153934756923458/AnsiballZ_file.py'
Oct 01 16:45:42 compute-0 sudo[164626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:42 compute-0 python3.9[164628]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:45:42 compute-0 sudo[164626]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:42 compute-0 podman[164699]: 2025-10-01 16:45:42.800443393 +0000 UTC m=+0.106570059 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller)
Oct 01 16:45:42 compute-0 ceph-mon[74273]: pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:42 compute-0 sudo[164802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdwhtdeuvhucdfpfzdupkfcgsakhdgcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337142.5612185-116-43785510968302/AnsiballZ_file.py'
Oct 01 16:45:42 compute-0 sudo[164802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:43 compute-0 python3.9[164804]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:45:43 compute-0 sudo[164802]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:43 compute-0 sudo[164954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybhzagiexwkpthbzxqppvpwjstyywvmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337143.2853637-116-148396276384181/AnsiballZ_file.py'
Oct 01 16:45:43 compute-0 sudo[164954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:45:43 compute-0 python3.9[164956]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:45:43 compute-0 sudo[164954]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:44 compute-0 sudo[165106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfuxuwuvdyrurkytybbefhrrvkxmpcgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337144.0134237-116-198260608912549/AnsiballZ_file.py'
Oct 01 16:45:44 compute-0 sudo[165106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:44 compute-0 python3.9[165108]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:45:44 compute-0 sudo[165106]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:45 compute-0 sudo[165258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pecxmqpokmueoopxeqcvgkakcauyoody ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337144.7810252-116-255322659412699/AnsiballZ_file.py'
Oct 01 16:45:45 compute-0 sudo[165258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:45 compute-0 python3.9[165260]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:45:45 compute-0 sudo[165258]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:46 compute-0 sudo[165410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uysyxnzkbieczxzzgavrxwzknklsgvmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337145.9132564-116-150290422032068/AnsiballZ_file.py'
Oct 01 16:45:46 compute-0 sudo[165410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:46 compute-0 python3.9[165412]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:45:46 compute-0 sudo[165410]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:47 compute-0 ceph-mon[74273]: pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:47 compute-0 sudo[165562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyzzetnlacvfuzkxvbwcxtgklxdknmci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337147.033457-116-161552703542912/AnsiballZ_file.py'
Oct 01 16:45:47 compute-0 sudo[165562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:47 compute-0 python3.9[165564]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:45:47 compute-0 sudo[165562]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:48 compute-0 sudo[165733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxbhbrcwlgmglzokkdwecvzexifahrrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337148.002744-166-40267691547416/AnsiballZ_file.py'
Oct 01 16:45:48 compute-0 sudo[165733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:48 compute-0 podman[165688]: 2025-10-01 16:45:48.446494253 +0000 UTC m=+0.080993842 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 01 16:45:49 compute-0 python3.9[165735]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:45:49 compute-0 sudo[165733]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:49 compute-0 sudo[165886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lehdmfyxlsgkhxymmqgaoqmsijihzfnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337149.2516344-166-244203801120270/AnsiballZ_file.py'
Oct 01 16:45:49 compute-0 sudo[165886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:45:50 compute-0 python3.9[165888]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:45:50 compute-0 sudo[165886]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:50 compute-0 ceph-mon[74273]: pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:51 compute-0 sudo[166038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubphehitotjfqeawbaobbxfwlwrjlgch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337150.6204014-166-133996372019703/AnsiballZ_file.py'
Oct 01 16:45:51 compute-0 sudo[166038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:51 compute-0 python3.9[166040]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:45:51 compute-0 sudo[166038]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:51 compute-0 ceph-mon[74273]: pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:51 compute-0 ceph-mon[74273]: pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:51 compute-0 sudo[166190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhqkustztsgsomlxyhmoubyiuazvvrtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337151.4197583-166-99562481327359/AnsiballZ_file.py'
Oct 01 16:45:51 compute-0 sudo[166190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:52 compute-0 python3.9[166192]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:45:52 compute-0 sudo[166190]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:52 compute-0 sudo[166342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hklrofasrhvctzjoplfljkspuyzcbqlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337152.2051427-166-110152127009590/AnsiballZ_file.py'
Oct 01 16:45:52 compute-0 sudo[166342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:52 compute-0 ceph-mon[74273]: pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:52 compute-0 python3.9[166344]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:45:52 compute-0 sudo[166342]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:53 compute-0 sudo[166494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eswtkqaaktqdfucfckehrddnwhjvqqhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337152.9609215-166-137158870115654/AnsiballZ_file.py'
Oct 01 16:45:53 compute-0 sudo[166494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:53 compute-0 python3.9[166496]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:45:53 compute-0 sudo[166494]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:54 compute-0 sudo[166646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isenlprjdnparbipwwneagbpbrmjswvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337153.731101-166-123767184502920/AnsiballZ_file.py'
Oct 01 16:45:54 compute-0 sudo[166646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:54 compute-0 python3.9[166648]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:45:54 compute-0 sudo[166646]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:54 compute-0 ceph-mon[74273]: pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:45:55 compute-0 sudo[166798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuhwvtompqxpexnjtqhhndtexawrlqfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337154.6065714-217-236908435341691/AnsiballZ_command.py'
Oct 01 16:45:55 compute-0 sudo[166798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:55 compute-0 python3.9[166800]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:45:55 compute-0 sudo[166798]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:56 compute-0 python3.9[166952]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 01 16:45:56 compute-0 ceph-mon[74273]: pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:56 compute-0 sudo[167102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpbuxkxvuwbqmbnbucepnmzfbccjexmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337156.3936477-235-248681754820528/AnsiballZ_systemd_service.py'
Oct 01 16:45:56 compute-0 sudo[167102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:57 compute-0 python3.9[167104]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 01 16:45:57 compute-0 systemd[1]: Reloading.
Oct 01 16:45:57 compute-0 systemd-rc-local-generator[167130]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:45:57 compute-0 systemd-sysv-generator[167134]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:45:57 compute-0 sudo[167102]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:57 compute-0 sudo[167288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdekbmovsstlzwwaiwqdrhplnnkuolvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337157.6116173-243-183232964684253/AnsiballZ_command.py'
Oct 01 16:45:57 compute-0 sudo[167288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:58 compute-0 python3.9[167290]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:45:58 compute-0 sudo[167288]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:58 compute-0 sudo[167441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mncbkjwwkqqcoydeuyqyinioaxgjsgjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337158.312962-243-168318575808158/AnsiballZ_command.py'
Oct 01 16:45:58 compute-0 sudo[167441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:58 compute-0 ceph-mon[74273]: pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:58 compute-0 python3.9[167443]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:45:58 compute-0 sudo[167441]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:59 compute-0 sudo[167594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-envmhsjoaifnesrjyyrkzejmkxpsucjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337158.9738684-243-101070618637825/AnsiballZ_command.py'
Oct 01 16:45:59 compute-0 sudo[167594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:59 compute-0 python3.9[167596]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:45:59 compute-0 sudo[167594]: pam_unix(sudo:session): session closed for user root
Oct 01 16:45:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:45:59 compute-0 sudo[167747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yggerdxbwaosienkpnphpcuhqzxsyxmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337159.557837-243-119058644997193/AnsiballZ_command.py'
Oct 01 16:45:59 compute-0 sudo[167747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:45:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:46:00 compute-0 python3.9[167749]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:46:00 compute-0 sudo[167747]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:00 compute-0 sudo[167900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-perdhdoyctbuiogfmsdygulyxhwgnars ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337160.3554647-243-76787425869959/AnsiballZ_command.py'
Oct 01 16:46:00 compute-0 sudo[167900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:46:00 compute-0 ceph-mon[74273]: pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:00 compute-0 python3.9[167902]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:46:00 compute-0 sudo[167900]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:01 compute-0 sudo[168053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pipfckklytzmsysomwiwqbclcstcktda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337161.1176171-243-133757947401047/AnsiballZ_command.py'
Oct 01 16:46:01 compute-0 sudo[168053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:46:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:01 compute-0 python3.9[168055]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:46:01 compute-0 sudo[168053]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:01 compute-0 sudo[168206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhwmcnjnzejgozuovwrimkzevmeoeheu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337161.7176013-243-264130295426622/AnsiballZ_command.py'
Oct 01 16:46:01 compute-0 sudo[168206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:46:02 compute-0 python3.9[168208]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:46:02 compute-0 sudo[168206]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:02 compute-0 ceph-mon[74273]: pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:03 compute-0 sudo[168359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwgagmtjhhdepjnzmkfrwrxapfvfwmby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337162.5902994-297-47206667656834/AnsiballZ_getent.py'
Oct 01 16:46:03 compute-0 sudo[168359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:46:03 compute-0 python3.9[168361]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 01 16:46:03 compute-0 sudo[168359]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:03 compute-0 sudo[168512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylmapllhawyoqciyopjayoebowzflagp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337163.437593-305-114172669060452/AnsiballZ_group.py'
Oct 01 16:46:03 compute-0 sudo[168512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:46:04 compute-0 python3.9[168514]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 01 16:46:04 compute-0 groupadd[168515]: group added to /etc/group: name=libvirt, GID=42473
Oct 01 16:46:04 compute-0 groupadd[168515]: group added to /etc/gshadow: name=libvirt
Oct 01 16:46:04 compute-0 groupadd[168515]: new group: name=libvirt, GID=42473
Oct 01 16:46:04 compute-0 sudo[168512]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:04 compute-0 ceph-mon[74273]: pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:04 compute-0 sudo[168670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mewbksrgwscfmzskdarhtpyrshgxkcvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337164.3099265-313-50768248451735/AnsiballZ_user.py'
Oct 01 16:46:04 compute-0 sudo[168670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:46:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:46:05 compute-0 python3.9[168672]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 01 16:46:05 compute-0 useradd[168674]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Oct 01 16:46:05 compute-0 sudo[168670]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:06 compute-0 sudo[168830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jekjvejggdkzasduujzifexwnkpmpiuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337165.6314056-324-127748388749856/AnsiballZ_setup.py'
Oct 01 16:46:06 compute-0 sudo[168830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:46:06 compute-0 python3.9[168832]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 16:46:06 compute-0 sudo[168830]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:06 compute-0 ceph-mon[74273]: pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:07 compute-0 sudo[168914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouioixdjzwfeetxjomkwygzpemydbxyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337165.6314056-324-127748388749856/AnsiballZ_dnf.py'
Oct 01 16:46:07 compute-0 sudo[168914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:46:07 compute-0 python3.9[168916]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:46:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:08 compute-0 ceph-mon[74273]: pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 8 op/s
Oct 01 16:46:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:46:10 compute-0 ceph-mon[74273]: pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 8 op/s
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:46:11
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', '.mgr', '.rgw.root', 'vms', 'default.rgw.meta']
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:46:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 8 op/s
Oct 01 16:46:12 compute-0 ceph-mon[74273]: pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 8 op/s
Oct 01 16:46:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 44 op/s
Oct 01 16:46:13 compute-0 podman[168928]: 2025-10-01 16:46:13.770690038 +0000 UTC m=+0.085233438 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Oct 01 16:46:14 compute-0 ceph-mon[74273]: pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 44 op/s
Oct 01 16:46:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:46:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 16:46:16 compute-0 ceph-mon[74273]: pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 16:46:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 16:46:18 compute-0 podman[169079]: 2025-10-01 16:46:18.764619779 +0000 UTC m=+0.068571875 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 01 16:46:18 compute-0 ceph-mon[74273]: pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 16:46:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 16:46:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:46:19.943 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:46:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:46:19.944 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:46:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:46:19.944 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:46:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:46:20 compute-0 sudo[169146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:46:20 compute-0 sudo[169146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:20 compute-0 sudo[169146]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:20 compute-0 sudo[169171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:46:20 compute-0 sudo[169171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:20 compute-0 sudo[169171]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:20 compute-0 sudo[169196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:46:20 compute-0 sudo[169196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:20 compute-0 sudo[169196]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:46:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:46:20 compute-0 sudo[169221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 16:46:20 compute-0 sudo[169221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:20 compute-0 ceph-mon[74273]: pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 16:46:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Oct 01 16:46:21 compute-0 podman[169319]: 2025-10-01 16:46:21.56225667 +0000 UTC m=+0.077739521 container exec bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 16:46:21 compute-0 podman[169319]: 2025-10-01 16:46:21.679277523 +0000 UTC m=+0.194760334 container exec_died bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 16:46:22 compute-0 sudo[169221]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:46:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:46:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:46:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:46:22 compute-0 sudo[169480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:46:22 compute-0 sudo[169480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:22 compute-0 sudo[169480]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:22 compute-0 sudo[169505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:46:22 compute-0 sudo[169505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:22 compute-0 sudo[169505]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:22 compute-0 sudo[169530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:46:22 compute-0 sudo[169530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:22 compute-0 sudo[169530]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:22 compute-0 sudo[169555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:46:22 compute-0 sudo[169555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:22 compute-0 ceph-mon[74273]: pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Oct 01 16:46:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:46:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:46:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Oct 01 16:46:23 compute-0 sudo[169555]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 01 16:46:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 16:46:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:46:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:46:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:46:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:46:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:46:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:46:23 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev ded9a886-7e3c-4d2f-9905-dfac4cd8831a does not exist
Oct 01 16:46:23 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 66253e52-79ce-4598-ade4-63d07135e362 does not exist
Oct 01 16:46:23 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 47019f2d-1d52-45bf-868a-51ffb0b69640 does not exist
Oct 01 16:46:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:46:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:46:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:46:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:46:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:46:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:46:23 compute-0 sudo[169611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:46:23 compute-0 sudo[169611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:23 compute-0 sudo[169611]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:23 compute-0 sudo[169636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:46:23 compute-0 sudo[169636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:23 compute-0 sudo[169636]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:23 compute-0 sudo[169661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:46:23 compute-0 sudo[169661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:23 compute-0 sudo[169661]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:23 compute-0 sudo[169686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:46:23 compute-0 sudo[169686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 16:46:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:46:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:46:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:46:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:46:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:46:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:46:24 compute-0 podman[169751]: 2025-10-01 16:46:24.375976788 +0000 UTC m=+0.053761748 container create 621484005d236b9f91a2a47c2a0a1c3c77434a2542f6805821f46bb4f961b16d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:46:24 compute-0 systemd[1]: Started libpod-conmon-621484005d236b9f91a2a47c2a0a1c3c77434a2542f6805821f46bb4f961b16d.scope.
Oct 01 16:46:24 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:46:24 compute-0 podman[169751]: 2025-10-01 16:46:24.444333922 +0000 UTC m=+0.122118902 container init 621484005d236b9f91a2a47c2a0a1c3c77434a2542f6805821f46bb4f961b16d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pare, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:46:24 compute-0 podman[169751]: 2025-10-01 16:46:24.354259829 +0000 UTC m=+0.032044839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:46:24 compute-0 podman[169751]: 2025-10-01 16:46:24.455521551 +0000 UTC m=+0.133306511 container start 621484005d236b9f91a2a47c2a0a1c3c77434a2542f6805821f46bb4f961b16d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 16:46:24 compute-0 podman[169751]: 2025-10-01 16:46:24.458606834 +0000 UTC m=+0.136391794 container attach 621484005d236b9f91a2a47c2a0a1c3c77434a2542f6805821f46bb4f961b16d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 16:46:24 compute-0 systemd[1]: libpod-621484005d236b9f91a2a47c2a0a1c3c77434a2542f6805821f46bb4f961b16d.scope: Deactivated successfully.
Oct 01 16:46:24 compute-0 naughty_pare[169767]: 167 167
Oct 01 16:46:24 compute-0 conmon[169767]: conmon 621484005d236b9f91a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-621484005d236b9f91a2a47c2a0a1c3c77434a2542f6805821f46bb4f961b16d.scope/container/memory.events
Oct 01 16:46:24 compute-0 podman[169772]: 2025-10-01 16:46:24.521467343 +0000 UTC m=+0.038209255 container died 621484005d236b9f91a2a47c2a0a1c3c77434a2542f6805821f46bb4f961b16d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 01 16:46:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ce59c7f26f16c4261c49774bc8da0172beb34b7541c22a630edb874d6212f1f-merged.mount: Deactivated successfully.
Oct 01 16:46:24 compute-0 podman[169772]: 2025-10-01 16:46:24.571555599 +0000 UTC m=+0.088297481 container remove 621484005d236b9f91a2a47c2a0a1c3c77434a2542f6805821f46bb4f961b16d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 01 16:46:24 compute-0 systemd[1]: libpod-conmon-621484005d236b9f91a2a47c2a0a1c3c77434a2542f6805821f46bb4f961b16d.scope: Deactivated successfully.
Oct 01 16:46:24 compute-0 podman[169796]: 2025-10-01 16:46:24.832880213 +0000 UTC m=+0.066624245 container create b8a8c463f06ddac94c0157cc5f9b3f5efc6ccfe4461f34c0311534a109e7d4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 01 16:46:24 compute-0 systemd[1]: Started libpod-conmon-b8a8c463f06ddac94c0157cc5f9b3f5efc6ccfe4461f34c0311534a109e7d4ca.scope.
Oct 01 16:46:24 compute-0 podman[169796]: 2025-10-01 16:46:24.809818492 +0000 UTC m=+0.043562554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:46:24 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29da5f9c25896839253f09512f176af70ac4868565218332d1d77b03c9a9f22a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29da5f9c25896839253f09512f176af70ac4868565218332d1d77b03c9a9f22a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29da5f9c25896839253f09512f176af70ac4868565218332d1d77b03c9a9f22a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29da5f9c25896839253f09512f176af70ac4868565218332d1d77b03c9a9f22a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29da5f9c25896839253f09512f176af70ac4868565218332d1d77b03c9a9f22a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:46:24 compute-0 podman[169796]: 2025-10-01 16:46:24.950993767 +0000 UTC m=+0.184737809 container init b8a8c463f06ddac94c0157cc5f9b3f5efc6ccfe4461f34c0311534a109e7d4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_euler, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:46:24 compute-0 podman[169796]: 2025-10-01 16:46:24.981037162 +0000 UTC m=+0.214781204 container start b8a8c463f06ddac94c0157cc5f9b3f5efc6ccfe4461f34c0311534a109e7d4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:46:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:46:24 compute-0 podman[169796]: 2025-10-01 16:46:24.99282932 +0000 UTC m=+0.226573342 container attach b8a8c463f06ddac94c0157cc5f9b3f5efc6ccfe4461f34c0311534a109e7d4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_euler, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 16:46:25 compute-0 ceph-mon[74273]: pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Oct 01 16:46:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 15 op/s
Oct 01 16:46:26 compute-0 ceph-mon[74273]: pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 15 op/s
Oct 01 16:46:26 compute-0 cool_euler[169812]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:46:26 compute-0 cool_euler[169812]: --> relative data size: 1.0
Oct 01 16:46:26 compute-0 cool_euler[169812]: --> All data devices are unavailable
Oct 01 16:46:26 compute-0 systemd[1]: libpod-b8a8c463f06ddac94c0157cc5f9b3f5efc6ccfe4461f34c0311534a109e7d4ca.scope: Deactivated successfully.
Oct 01 16:46:26 compute-0 systemd[1]: libpod-b8a8c463f06ddac94c0157cc5f9b3f5efc6ccfe4461f34c0311534a109e7d4ca.scope: Consumed 1.083s CPU time.
Oct 01 16:46:26 compute-0 conmon[169812]: conmon b8a8c463f06ddac94c01 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8a8c463f06ddac94c0157cc5f9b3f5efc6ccfe4461f34c0311534a109e7d4ca.scope/container/memory.events
Oct 01 16:46:26 compute-0 podman[169796]: 2025-10-01 16:46:26.115638585 +0000 UTC m=+1.349382627 container died b8a8c463f06ddac94c0157cc5f9b3f5efc6ccfe4461f34c0311534a109e7d4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 16:46:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-29da5f9c25896839253f09512f176af70ac4868565218332d1d77b03c9a9f22a-merged.mount: Deactivated successfully.
Oct 01 16:46:26 compute-0 podman[169796]: 2025-10-01 16:46:26.20706545 +0000 UTC m=+1.440809492 container remove b8a8c463f06ddac94c0157cc5f9b3f5efc6ccfe4461f34c0311534a109e7d4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_euler, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 01 16:46:26 compute-0 systemd[1]: libpod-conmon-b8a8c463f06ddac94c0157cc5f9b3f5efc6ccfe4461f34c0311534a109e7d4ca.scope: Deactivated successfully.
Oct 01 16:46:26 compute-0 sudo[169686]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:26 compute-0 sudo[169857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:46:26 compute-0 sudo[169857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:26 compute-0 sudo[169857]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:26 compute-0 sudo[169882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:46:26 compute-0 sudo[169882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:26 compute-0 sudo[169882]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:26 compute-0 sudo[169907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:46:26 compute-0 sudo[169907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:26 compute-0 sudo[169907]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:26 compute-0 sudo[169932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:46:26 compute-0 sudo[169932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:26 compute-0 podman[169999]: 2025-10-01 16:46:26.936405274 +0000 UTC m=+0.065090233 container create ae07c31f2605f593d8ba2ecc1f9c07d9d1e70d54a2ae95c98a6e2c29a9b25ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 16:46:26 compute-0 systemd[1]: Started libpod-conmon-ae07c31f2605f593d8ba2ecc1f9c07d9d1e70d54a2ae95c98a6e2c29a9b25ad6.scope.
Oct 01 16:46:26 compute-0 podman[169999]: 2025-10-01 16:46:26.899028059 +0000 UTC m=+0.027713068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:46:27 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:46:27 compute-0 podman[169999]: 2025-10-01 16:46:27.042307412 +0000 UTC m=+0.170992461 container init ae07c31f2605f593d8ba2ecc1f9c07d9d1e70d54a2ae95c98a6e2c29a9b25ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lamport, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:46:27 compute-0 podman[169999]: 2025-10-01 16:46:27.052770218 +0000 UTC m=+0.181455167 container start ae07c31f2605f593d8ba2ecc1f9c07d9d1e70d54a2ae95c98a6e2c29a9b25ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lamport, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:46:27 compute-0 podman[169999]: 2025-10-01 16:46:27.057000434 +0000 UTC m=+0.185685473 container attach ae07c31f2605f593d8ba2ecc1f9c07d9d1e70d54a2ae95c98a6e2c29a9b25ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lamport, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 16:46:27 compute-0 silly_lamport[170015]: 167 167
Oct 01 16:46:27 compute-0 systemd[1]: libpod-ae07c31f2605f593d8ba2ecc1f9c07d9d1e70d54a2ae95c98a6e2c29a9b25ad6.scope: Deactivated successfully.
Oct 01 16:46:27 compute-0 podman[169999]: 2025-10-01 16:46:27.06164796 +0000 UTC m=+0.190332959 container died ae07c31f2605f593d8ba2ecc1f9c07d9d1e70d54a2ae95c98a6e2c29a9b25ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 16:46:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e7595b46bb0ab7ecdd85e2fdbbec752fe0382499e9cf1addb75a5a33dee5808-merged.mount: Deactivated successfully.
Oct 01 16:46:27 compute-0 podman[169999]: 2025-10-01 16:46:27.130465015 +0000 UTC m=+0.259149984 container remove ae07c31f2605f593d8ba2ecc1f9c07d9d1e70d54a2ae95c98a6e2c29a9b25ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:46:27 compute-0 systemd[1]: libpod-conmon-ae07c31f2605f593d8ba2ecc1f9c07d9d1e70d54a2ae95c98a6e2c29a9b25ad6.scope: Deactivated successfully.
Oct 01 16:46:27 compute-0 podman[170039]: 2025-10-01 16:46:27.329413593 +0000 UTC m=+0.059461122 container create 29e1cb0d161d1319fda500d376dddf1f9d7bbcc3a94149d0c0744d9f437ce105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_colden, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:46:27 compute-0 systemd[1]: Started libpod-conmon-29e1cb0d161d1319fda500d376dddf1f9d7bbcc3a94149d0c0744d9f437ce105.scope.
Oct 01 16:46:27 compute-0 podman[170039]: 2025-10-01 16:46:27.30091948 +0000 UTC m=+0.030967019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:46:27 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:46:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16278589ceca51389fd3a9a93f64aa616bf3381244c8cf7582d3c10e1304246/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:46:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16278589ceca51389fd3a9a93f64aa616bf3381244c8cf7582d3c10e1304246/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:46:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16278589ceca51389fd3a9a93f64aa616bf3381244c8cf7582d3c10e1304246/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:46:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16278589ceca51389fd3a9a93f64aa616bf3381244c8cf7582d3c10e1304246/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:46:27 compute-0 podman[170039]: 2025-10-01 16:46:27.478219842 +0000 UTC m=+0.208267441 container init 29e1cb0d161d1319fda500d376dddf1f9d7bbcc3a94149d0c0744d9f437ce105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 16:46:27 compute-0 podman[170039]: 2025-10-01 16:46:27.490994626 +0000 UTC m=+0.221042155 container start 29e1cb0d161d1319fda500d376dddf1f9d7bbcc3a94149d0c0744d9f437ce105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 01 16:46:27 compute-0 podman[170039]: 2025-10-01 16:46:27.504287203 +0000 UTC m=+0.234334702 container attach 29e1cb0d161d1319fda500d376dddf1f9d7bbcc3a94149d0c0744d9f437ce105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:46:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:28 compute-0 laughing_colden[170056]: {
Oct 01 16:46:28 compute-0 laughing_colden[170056]:     "0": [
Oct 01 16:46:28 compute-0 laughing_colden[170056]:         {
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "devices": [
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "/dev/loop3"
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             ],
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "lv_name": "ceph_lv0",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "lv_size": "21470642176",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "name": "ceph_lv0",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "tags": {
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.cluster_name": "ceph",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.crush_device_class": "",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.encrypted": "0",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.osd_id": "0",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.type": "block",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.vdo": "0"
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             },
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "type": "block",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "vg_name": "ceph_vg0"
Oct 01 16:46:28 compute-0 laughing_colden[170056]:         }
Oct 01 16:46:28 compute-0 laughing_colden[170056]:     ],
Oct 01 16:46:28 compute-0 laughing_colden[170056]:     "1": [
Oct 01 16:46:28 compute-0 laughing_colden[170056]:         {
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "devices": [
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "/dev/loop4"
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             ],
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "lv_name": "ceph_lv1",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "lv_size": "21470642176",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "name": "ceph_lv1",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "tags": {
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.cluster_name": "ceph",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.crush_device_class": "",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.encrypted": "0",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.osd_id": "1",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.type": "block",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.vdo": "0"
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             },
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "type": "block",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "vg_name": "ceph_vg1"
Oct 01 16:46:28 compute-0 laughing_colden[170056]:         }
Oct 01 16:46:28 compute-0 laughing_colden[170056]:     ],
Oct 01 16:46:28 compute-0 laughing_colden[170056]:     "2": [
Oct 01 16:46:28 compute-0 laughing_colden[170056]:         {
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "devices": [
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "/dev/loop5"
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             ],
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "lv_name": "ceph_lv2",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "lv_size": "21470642176",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "name": "ceph_lv2",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "tags": {
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.cluster_name": "ceph",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.crush_device_class": "",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.encrypted": "0",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.osd_id": "2",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.type": "block",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:                 "ceph.vdo": "0"
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             },
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "type": "block",
Oct 01 16:46:28 compute-0 laughing_colden[170056]:             "vg_name": "ceph_vg2"
Oct 01 16:46:28 compute-0 laughing_colden[170056]:         }
Oct 01 16:46:28 compute-0 laughing_colden[170056]:     ]
Oct 01 16:46:28 compute-0 laughing_colden[170056]: }
Oct 01 16:46:28 compute-0 systemd[1]: libpod-29e1cb0d161d1319fda500d376dddf1f9d7bbcc3a94149d0c0744d9f437ce105.scope: Deactivated successfully.
Oct 01 16:46:28 compute-0 podman[170039]: 2025-10-01 16:46:28.241078004 +0000 UTC m=+0.971125493 container died 29e1cb0d161d1319fda500d376dddf1f9d7bbcc3a94149d0c0744d9f437ce105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 16:46:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c16278589ceca51389fd3a9a93f64aa616bf3381244c8cf7582d3c10e1304246-merged.mount: Deactivated successfully.
Oct 01 16:46:28 compute-0 podman[170039]: 2025-10-01 16:46:28.322235152 +0000 UTC m=+1.052282651 container remove 29e1cb0d161d1319fda500d376dddf1f9d7bbcc3a94149d0c0744d9f437ce105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_colden, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 16:46:28 compute-0 systemd[1]: libpod-conmon-29e1cb0d161d1319fda500d376dddf1f9d7bbcc3a94149d0c0744d9f437ce105.scope: Deactivated successfully.
Oct 01 16:46:28 compute-0 sudo[169932]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:28 compute-0 sudo[170079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:46:28 compute-0 sudo[170079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:28 compute-0 sudo[170079]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:28 compute-0 sudo[170104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:46:28 compute-0 sudo[170104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:28 compute-0 sudo[170104]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:28 compute-0 sudo[170129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:46:28 compute-0 sudo[170129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:28 compute-0 sudo[170129]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:28 compute-0 ceph-mon[74273]: pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:28 compute-0 sudo[170154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:46:28 compute-0 sudo[170154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:29 compute-0 podman[170220]: 2025-10-01 16:46:29.000449434 +0000 UTC m=+0.046004157 container create b3f31ae84d080d641dc473be2dca4524d47ead5b5dec2ea3e27b3552fe3fa8b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swartz, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 16:46:29 compute-0 systemd[1]: Started libpod-conmon-b3f31ae84d080d641dc473be2dca4524d47ead5b5dec2ea3e27b3552fe3fa8b2.scope.
Oct 01 16:46:29 compute-0 podman[170220]: 2025-10-01 16:46:28.983479026 +0000 UTC m=+0.029033769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:46:29 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:46:29 compute-0 podman[170220]: 2025-10-01 16:46:29.091620517 +0000 UTC m=+0.137175230 container init b3f31ae84d080d641dc473be2dca4524d47ead5b5dec2ea3e27b3552fe3fa8b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:46:29 compute-0 podman[170220]: 2025-10-01 16:46:29.097604995 +0000 UTC m=+0.143159698 container start b3f31ae84d080d641dc473be2dca4524d47ead5b5dec2ea3e27b3552fe3fa8b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 16:46:29 compute-0 unruffled_swartz[170237]: 167 167
Oct 01 16:46:29 compute-0 systemd[1]: libpod-b3f31ae84d080d641dc473be2dca4524d47ead5b5dec2ea3e27b3552fe3fa8b2.scope: Deactivated successfully.
Oct 01 16:46:29 compute-0 podman[170220]: 2025-10-01 16:46:29.101669394 +0000 UTC m=+0.147224117 container attach b3f31ae84d080d641dc473be2dca4524d47ead5b5dec2ea3e27b3552fe3fa8b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swartz, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:46:29 compute-0 podman[170220]: 2025-10-01 16:46:29.101907735 +0000 UTC m=+0.147462438 container died b3f31ae84d080d641dc473be2dca4524d47ead5b5dec2ea3e27b3552fe3fa8b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swartz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:46:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-46c744995f6456813824fb499be6c41ba6e28aa7708c6b82fa53a7c8d960faa8-merged.mount: Deactivated successfully.
Oct 01 16:46:29 compute-0 podman[170220]: 2025-10-01 16:46:29.143679284 +0000 UTC m=+0.189234027 container remove b3f31ae84d080d641dc473be2dca4524d47ead5b5dec2ea3e27b3552fe3fa8b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:46:29 compute-0 systemd[1]: libpod-conmon-b3f31ae84d080d641dc473be2dca4524d47ead5b5dec2ea3e27b3552fe3fa8b2.scope: Deactivated successfully.
Oct 01 16:46:29 compute-0 podman[170261]: 2025-10-01 16:46:29.333557081 +0000 UTC m=+0.046429157 container create 9c4c0a0df34fa900f8fadfb62fcca4a0733f700fe2ee4911e4688a45c4b1f7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 16:46:29 compute-0 systemd[1]: Started libpod-conmon-9c4c0a0df34fa900f8fadfb62fcca4a0733f700fe2ee4911e4688a45c4b1f7dd.scope.
Oct 01 16:46:29 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:46:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b792f5e8b0f46325ec06dd449462dc9c5f82309cee1916e1c7b9c87c0819240/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:46:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b792f5e8b0f46325ec06dd449462dc9c5f82309cee1916e1c7b9c87c0819240/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:46:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b792f5e8b0f46325ec06dd449462dc9c5f82309cee1916e1c7b9c87c0819240/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:46:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b792f5e8b0f46325ec06dd449462dc9c5f82309cee1916e1c7b9c87c0819240/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:46:29 compute-0 podman[170261]: 2025-10-01 16:46:29.31545511 +0000 UTC m=+0.028327186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:46:29 compute-0 podman[170261]: 2025-10-01 16:46:29.414272719 +0000 UTC m=+0.127144815 container init 9c4c0a0df34fa900f8fadfb62fcca4a0733f700fe2ee4911e4688a45c4b1f7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_leavitt, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 16:46:29 compute-0 podman[170261]: 2025-10-01 16:46:29.419584125 +0000 UTC m=+0.132456201 container start 9c4c0a0df34fa900f8fadfb62fcca4a0733f700fe2ee4911e4688a45c4b1f7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_leavitt, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:46:29 compute-0 podman[170261]: 2025-10-01 16:46:29.422971163 +0000 UTC m=+0.135843259 container attach 9c4c0a0df34fa900f8fadfb62fcca4a0733f700fe2ee4911e4688a45c4b1f7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_leavitt, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:46:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]: {
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:         "osd_id": 2,
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:         "type": "bluestore"
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:     },
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:         "osd_id": 0,
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:         "type": "bluestore"
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:     },
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:         "osd_id": 1,
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:         "type": "bluestore"
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]:     }
Oct 01 16:46:30 compute-0 cranky_leavitt[170277]: }
Oct 01 16:46:30 compute-0 systemd[1]: libpod-9c4c0a0df34fa900f8fadfb62fcca4a0733f700fe2ee4911e4688a45c4b1f7dd.scope: Deactivated successfully.
Oct 01 16:46:30 compute-0 podman[170261]: 2025-10-01 16:46:30.476063189 +0000 UTC m=+1.188935305 container died 9c4c0a0df34fa900f8fadfb62fcca4a0733f700fe2ee4911e4688a45c4b1f7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 01 16:46:30 compute-0 systemd[1]: libpod-9c4c0a0df34fa900f8fadfb62fcca4a0733f700fe2ee4911e4688a45c4b1f7dd.scope: Consumed 1.064s CPU time.
Oct 01 16:46:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b792f5e8b0f46325ec06dd449462dc9c5f82309cee1916e1c7b9c87c0819240-merged.mount: Deactivated successfully.
Oct 01 16:46:30 compute-0 podman[170261]: 2025-10-01 16:46:30.535856436 +0000 UTC m=+1.248728512 container remove 9c4c0a0df34fa900f8fadfb62fcca4a0733f700fe2ee4911e4688a45c4b1f7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_leavitt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 16:46:30 compute-0 systemd[1]: libpod-conmon-9c4c0a0df34fa900f8fadfb62fcca4a0733f700fe2ee4911e4688a45c4b1f7dd.scope: Deactivated successfully.
Oct 01 16:46:30 compute-0 sudo[170154]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:46:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:46:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:46:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:46:30 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 248f0f70-b084-4483-b1a1-da4b25dcc3ba does not exist
Oct 01 16:46:30 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev aea075c1-9a99-4458-84cc-a98b961958df does not exist
Oct 01 16:46:30 compute-0 ceph-mon[74273]: pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:46:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:46:30 compute-0 sudo[170324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:46:30 compute-0 sudo[170324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:30 compute-0 sudo[170324]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:30 compute-0 sudo[170349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:46:30 compute-0 sudo[170349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:46:30 compute-0 sudo[170349]: pam_unix(sudo:session): session closed for user root
Oct 01 16:46:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:32 compute-0 ceph-mon[74273]: pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:34 compute-0 ceph-mon[74273]: pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:34 compute-0 kernel: SELinux:  Converting 2766 SID table entries...
Oct 01 16:46:34 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 16:46:34 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 01 16:46:34 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 16:46:34 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 01 16:46:34 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 16:46:34 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 16:46:34 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 16:46:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:46:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:36 compute-0 ceph-mon[74273]: pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:38 compute-0 ceph-mon[74273]: pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:46:40 compute-0 ceph-mon[74273]: pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:46:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:46:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:46:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:46:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:46:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:46:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:42 compute-0 ceph-mon[74273]: pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:44 compute-0 kernel: SELinux:  Converting 2766 SID table entries...
Oct 01 16:46:44 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 16:46:44 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 01 16:46:44 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 16:46:44 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 01 16:46:44 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 16:46:44 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 16:46:44 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 16:46:44 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct 01 16:46:44 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct 01 16:46:44 compute-0 ceph-mon[74273]: pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:44 compute-0 podman[170388]: 2025-10-01 16:46:44.812001633 +0000 UTC m=+0.107805316 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:46:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:46:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:46 compute-0 ceph-mon[74273]: pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:48 compute-0 ceph-mon[74273]: pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:49 compute-0 podman[170415]: 2025-10-01 16:46:49.730841138 +0000 UTC m=+0.050313437 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 01 16:46:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:46:51 compute-0 ceph-mon[74273]: pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:52 compute-0 ceph-mon[74273]: pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:54 compute-0 ceph-mon[74273]: pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:46:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:56 compute-0 ceph-mon[74273]: pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:58 compute-0 ceph-mon[74273]: pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:46:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:47:00 compute-0 ceph-mon[74273]: pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:02 compute-0 ceph-mon[74273]: pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:04 compute-0 ceph-mon[74273]: pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:47:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:06 compute-0 ceph-mon[74273]: pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:08 compute-0 ceph-mon[74273]: pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:47:10 compute-0 ceph-mon[74273]: pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:47:11
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'images', 'default.rgw.log']
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:47:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:12 compute-0 ceph-mon[74273]: pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:14 compute-0 ceph-mon[74273]: pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:47:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:15 compute-0 podman[180558]: 2025-10-01 16:47:15.757739577 +0000 UTC m=+0.078083705 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 01 16:47:16 compute-0 ceph-mon[74273]: pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:18 compute-0 ceph-mon[74273]: pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:47:19.945 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:47:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:47:19.945 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:47:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:47:19.946 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:47:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:47:20 compute-0 podman[183294]: 2025-10-01 16:47:20.73757728 +0000 UTC m=+0.057311950 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 01 16:47:20 compute-0 ceph-mon[74273]: pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:47:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:47:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:22 compute-0 ceph-mon[74273]: pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:24 compute-0 ceph-mon[74273]: pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:47:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:26 compute-0 ceph-mon[74273]: pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:29 compute-0 ceph-mon[74273]: pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:47:30 compute-0 sudo[187230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:47:30 compute-0 sudo[187230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:47:30 compute-0 sudo[187230]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:30 compute-0 sudo[187255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:47:30 compute-0 sudo[187255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:47:30 compute-0 sudo[187255]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:30 compute-0 sudo[187280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:47:30 compute-0 sudo[187280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:47:30 compute-0 sudo[187280]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:31 compute-0 ceph-mon[74273]: pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:31 compute-0 sudo[187305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:47:31 compute-0 sudo[187305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:47:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:31 compute-0 sudo[187305]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:47:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:47:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:47:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:47:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:47:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:47:31 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev e49c61dd-7d26-48eb-aeb8-0d7128d5e101 does not exist
Oct 01 16:47:31 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 8de4217d-ea9b-4d22-ab1c-15e2efad8e99 does not exist
Oct 01 16:47:31 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 2f3f1482-571a-40d0-b68a-946b4ef16d40 does not exist
Oct 01 16:47:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:47:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:47:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:47:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:47:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:47:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:47:31 compute-0 sudo[187362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:47:31 compute-0 sudo[187362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:47:31 compute-0 sudo[187362]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:31 compute-0 sudo[187387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:47:31 compute-0 sudo[187387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:47:31 compute-0 sudo[187387]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:31 compute-0 sudo[187412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:47:31 compute-0 sudo[187412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:47:31 compute-0 sudo[187412]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:31 compute-0 sudo[187437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:47:31 compute-0 sudo[187437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:47:32 compute-0 ceph-mon[74273]: pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:47:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:47:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:47:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:47:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:47:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:47:32 compute-0 podman[187504]: 2025-10-01 16:47:32.308825612 +0000 UTC m=+0.052308384 container create 57f31ba73f64553a34fd6e49e1a1d723337f7e225b8bb70d4b1719d0f8bf8b7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_maxwell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 16:47:32 compute-0 systemd[1]: Started libpod-conmon-57f31ba73f64553a34fd6e49e1a1d723337f7e225b8bb70d4b1719d0f8bf8b7e.scope.
Oct 01 16:47:32 compute-0 podman[187504]: 2025-10-01 16:47:32.285387087 +0000 UTC m=+0.028869899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:47:32 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:47:32 compute-0 podman[187504]: 2025-10-01 16:47:32.420604587 +0000 UTC m=+0.164087369 container init 57f31ba73f64553a34fd6e49e1a1d723337f7e225b8bb70d4b1719d0f8bf8b7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_maxwell, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:47:32 compute-0 podman[187504]: 2025-10-01 16:47:32.42720434 +0000 UTC m=+0.170687122 container start 57f31ba73f64553a34fd6e49e1a1d723337f7e225b8bb70d4b1719d0f8bf8b7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:47:32 compute-0 podman[187504]: 2025-10-01 16:47:32.430772943 +0000 UTC m=+0.174255725 container attach 57f31ba73f64553a34fd6e49e1a1d723337f7e225b8bb70d4b1719d0f8bf8b7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 16:47:32 compute-0 frosty_maxwell[187521]: 167 167
Oct 01 16:47:32 compute-0 systemd[1]: libpod-57f31ba73f64553a34fd6e49e1a1d723337f7e225b8bb70d4b1719d0f8bf8b7e.scope: Deactivated successfully.
Oct 01 16:47:32 compute-0 conmon[187521]: conmon 57f31ba73f64553a34fd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57f31ba73f64553a34fd6e49e1a1d723337f7e225b8bb70d4b1719d0f8bf8b7e.scope/container/memory.events
Oct 01 16:47:32 compute-0 podman[187504]: 2025-10-01 16:47:32.441952013 +0000 UTC m=+0.185434825 container died 57f31ba73f64553a34fd6e49e1a1d723337f7e225b8bb70d4b1719d0f8bf8b7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 16:47:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad586eee99ee8beb5405833cc41e596a65ff489bcc5e3df9c3f085a6c5db91d5-merged.mount: Deactivated successfully.
Oct 01 16:47:32 compute-0 podman[187504]: 2025-10-01 16:47:32.488135874 +0000 UTC m=+0.231618646 container remove 57f31ba73f64553a34fd6e49e1a1d723337f7e225b8bb70d4b1719d0f8bf8b7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_maxwell, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:47:32 compute-0 systemd[1]: libpod-conmon-57f31ba73f64553a34fd6e49e1a1d723337f7e225b8bb70d4b1719d0f8bf8b7e.scope: Deactivated successfully.
Oct 01 16:47:32 compute-0 podman[187545]: 2025-10-01 16:47:32.736211635 +0000 UTC m=+0.065961060 container create 7ea9c775f3c5976989764f96f10a33b5913f1728334c480e616e7c70aee50058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 01 16:47:32 compute-0 systemd[1]: Started libpod-conmon-7ea9c775f3c5976989764f96f10a33b5913f1728334c480e616e7c70aee50058.scope.
Oct 01 16:47:32 compute-0 podman[187545]: 2025-10-01 16:47:32.694869122 +0000 UTC m=+0.024618597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:47:32 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a024ac95ccddfba1f490d9b2c9858167adfee5fb85f4b4f390a01a5d04d770/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a024ac95ccddfba1f490d9b2c9858167adfee5fb85f4b4f390a01a5d04d770/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a024ac95ccddfba1f490d9b2c9858167adfee5fb85f4b4f390a01a5d04d770/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a024ac95ccddfba1f490d9b2c9858167adfee5fb85f4b4f390a01a5d04d770/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a024ac95ccddfba1f490d9b2c9858167adfee5fb85f4b4f390a01a5d04d770/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:47:32 compute-0 podman[187545]: 2025-10-01 16:47:32.83703613 +0000 UTC m=+0.166785525 container init 7ea9c775f3c5976989764f96f10a33b5913f1728334c480e616e7c70aee50058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 16:47:32 compute-0 podman[187545]: 2025-10-01 16:47:32.846149371 +0000 UTC m=+0.175898786 container start 7ea9c775f3c5976989764f96f10a33b5913f1728334c480e616e7c70aee50058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:47:32 compute-0 podman[187545]: 2025-10-01 16:47:32.85055994 +0000 UTC m=+0.180309415 container attach 7ea9c775f3c5976989764f96f10a33b5913f1728334c480e616e7c70aee50058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:47:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:33 compute-0 friendly_jackson[187562]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:47:33 compute-0 friendly_jackson[187562]: --> relative data size: 1.0
Oct 01 16:47:33 compute-0 friendly_jackson[187562]: --> All data devices are unavailable
Oct 01 16:47:33 compute-0 systemd[1]: libpod-7ea9c775f3c5976989764f96f10a33b5913f1728334c480e616e7c70aee50058.scope: Deactivated successfully.
Oct 01 16:47:33 compute-0 podman[187545]: 2025-10-01 16:47:33.92667569 +0000 UTC m=+1.256425135 container died 7ea9c775f3c5976989764f96f10a33b5913f1728334c480e616e7c70aee50058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 16:47:33 compute-0 systemd[1]: libpod-7ea9c775f3c5976989764f96f10a33b5913f1728334c480e616e7c70aee50058.scope: Consumed 1.041s CPU time.
Oct 01 16:47:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4a024ac95ccddfba1f490d9b2c9858167adfee5fb85f4b4f390a01a5d04d770-merged.mount: Deactivated successfully.
Oct 01 16:47:33 compute-0 podman[187545]: 2025-10-01 16:47:33.983582481 +0000 UTC m=+1.313331866 container remove 7ea9c775f3c5976989764f96f10a33b5913f1728334c480e616e7c70aee50058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 16:47:33 compute-0 systemd[1]: libpod-conmon-7ea9c775f3c5976989764f96f10a33b5913f1728334c480e616e7c70aee50058.scope: Deactivated successfully.
Oct 01 16:47:34 compute-0 sudo[187437]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:34 compute-0 sudo[187605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:47:34 compute-0 sudo[187605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:47:34 compute-0 sudo[187605]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:34 compute-0 sudo[187630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:47:34 compute-0 sudo[187630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:47:34 compute-0 sudo[187630]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:34 compute-0 sudo[187655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:47:34 compute-0 sudo[187655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:47:34 compute-0 sudo[187655]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:34 compute-0 sudo[187680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:47:34 compute-0 sudo[187680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:47:34 compute-0 podman[187745]: 2025-10-01 16:47:34.62442738 +0000 UTC m=+0.046895013 container create 1a60090134b15d9c037793d15d4f14f1c586d50b60a412d6d6452f96485ce706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_greider, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 16:47:34 compute-0 ceph-mon[74273]: pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:34 compute-0 systemd[1]: Started libpod-conmon-1a60090134b15d9c037793d15d4f14f1c586d50b60a412d6d6452f96485ce706.scope.
Oct 01 16:47:34 compute-0 podman[187745]: 2025-10-01 16:47:34.60298887 +0000 UTC m=+0.025456503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:47:34 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:47:34 compute-0 podman[187745]: 2025-10-01 16:47:34.72168086 +0000 UTC m=+0.144148463 container init 1a60090134b15d9c037793d15d4f14f1c586d50b60a412d6d6452f96485ce706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_greider, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:47:34 compute-0 podman[187745]: 2025-10-01 16:47:34.734273891 +0000 UTC m=+0.156741524 container start 1a60090134b15d9c037793d15d4f14f1c586d50b60a412d6d6452f96485ce706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_greider, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:47:34 compute-0 podman[187745]: 2025-10-01 16:47:34.738122276 +0000 UTC m=+0.160589909 container attach 1a60090134b15d9c037793d15d4f14f1c586d50b60a412d6d6452f96485ce706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 16:47:34 compute-0 focused_greider[187761]: 167 167
Oct 01 16:47:34 compute-0 systemd[1]: libpod-1a60090134b15d9c037793d15d4f14f1c586d50b60a412d6d6452f96485ce706.scope: Deactivated successfully.
Oct 01 16:47:34 compute-0 conmon[187761]: conmon 1a60090134b15d9c0377 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1a60090134b15d9c037793d15d4f14f1c586d50b60a412d6d6452f96485ce706.scope/container/memory.events
Oct 01 16:47:34 compute-0 podman[187745]: 2025-10-01 16:47:34.742844178 +0000 UTC m=+0.165311811 container died 1a60090134b15d9c037793d15d4f14f1c586d50b60a412d6d6452f96485ce706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_greider, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:47:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd8646343cd02e237170f3428d1fbcf1648a014ba96729573ebd0428d870f0e2-merged.mount: Deactivated successfully.
Oct 01 16:47:34 compute-0 podman[187745]: 2025-10-01 16:47:34.789463918 +0000 UTC m=+0.211931521 container remove 1a60090134b15d9c037793d15d4f14f1c586d50b60a412d6d6452f96485ce706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_greider, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:47:34 compute-0 systemd[1]: libpod-conmon-1a60090134b15d9c037793d15d4f14f1c586d50b60a412d6d6452f96485ce706.scope: Deactivated successfully.
Oct 01 16:47:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:47:35 compute-0 podman[187783]: 2025-10-01 16:47:35.001684031 +0000 UTC m=+0.063903822 container create e01d0accaa0d93ffc7f799e690db0c28053385837ac6e2a2021d53d05befc877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:47:35 compute-0 systemd[1]: Started libpod-conmon-e01d0accaa0d93ffc7f799e690db0c28053385837ac6e2a2021d53d05befc877.scope.
Oct 01 16:47:35 compute-0 podman[187783]: 2025-10-01 16:47:34.978991958 +0000 UTC m=+0.041211739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:47:35 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9aa18b9afecac7fc7f55ccbc097304e40ca4ff3a229552c21cc1f3df83494ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9aa18b9afecac7fc7f55ccbc097304e40ca4ff3a229552c21cc1f3df83494ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9aa18b9afecac7fc7f55ccbc097304e40ca4ff3a229552c21cc1f3df83494ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9aa18b9afecac7fc7f55ccbc097304e40ca4ff3a229552c21cc1f3df83494ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:47:35 compute-0 podman[187783]: 2025-10-01 16:47:35.108759204 +0000 UTC m=+0.170979045 container init e01d0accaa0d93ffc7f799e690db0c28053385837ac6e2a2021d53d05befc877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_murdock, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:47:35 compute-0 podman[187783]: 2025-10-01 16:47:35.115527664 +0000 UTC m=+0.177747455 container start e01d0accaa0d93ffc7f799e690db0c28053385837ac6e2a2021d53d05befc877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 16:47:35 compute-0 podman[187783]: 2025-10-01 16:47:35.119435682 +0000 UTC m=+0.181655463 container attach e01d0accaa0d93ffc7f799e690db0c28053385837ac6e2a2021d53d05befc877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_murdock, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 16:47:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]: {
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:     "0": [
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:         {
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "devices": [
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "/dev/loop3"
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             ],
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "lv_name": "ceph_lv0",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "lv_size": "21470642176",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "name": "ceph_lv0",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "tags": {
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.cluster_name": "ceph",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.crush_device_class": "",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.encrypted": "0",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.osd_id": "0",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.type": "block",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.vdo": "0"
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             },
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "type": "block",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "vg_name": "ceph_vg0"
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:         }
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:     ],
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:     "1": [
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:         {
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "devices": [
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "/dev/loop4"
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             ],
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "lv_name": "ceph_lv1",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "lv_size": "21470642176",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "name": "ceph_lv1",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "tags": {
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.cluster_name": "ceph",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.crush_device_class": "",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.encrypted": "0",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.osd_id": "1",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.type": "block",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.vdo": "0"
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             },
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "type": "block",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "vg_name": "ceph_vg1"
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:         }
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:     ],
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:     "2": [
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:         {
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "devices": [
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "/dev/loop5"
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             ],
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "lv_name": "ceph_lv2",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "lv_size": "21470642176",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "name": "ceph_lv2",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "tags": {
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.cluster_name": "ceph",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.crush_device_class": "",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.encrypted": "0",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.osd_id": "2",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.type": "block",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:                 "ceph.vdo": "0"
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             },
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "type": "block",
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:             "vg_name": "ceph_vg2"
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:         }
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]:     ]
Oct 01 16:47:35 compute-0 vigilant_murdock[187799]: }
Oct 01 16:47:35 compute-0 systemd[1]: libpod-e01d0accaa0d93ffc7f799e690db0c28053385837ac6e2a2021d53d05befc877.scope: Deactivated successfully.
Oct 01 16:47:35 compute-0 podman[187783]: 2025-10-01 16:47:35.844136408 +0000 UTC m=+0.906356199 container died e01d0accaa0d93ffc7f799e690db0c28053385837ac6e2a2021d53d05befc877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_murdock, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:47:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9aa18b9afecac7fc7f55ccbc097304e40ca4ff3a229552c21cc1f3df83494ca-merged.mount: Deactivated successfully.
Oct 01 16:47:35 compute-0 podman[187783]: 2025-10-01 16:47:35.908265719 +0000 UTC m=+0.970485470 container remove e01d0accaa0d93ffc7f799e690db0c28053385837ac6e2a2021d53d05befc877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_murdock, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 16:47:35 compute-0 systemd[1]: libpod-conmon-e01d0accaa0d93ffc7f799e690db0c28053385837ac6e2a2021d53d05befc877.scope: Deactivated successfully.
Oct 01 16:47:35 compute-0 sudo[187680]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:36 compute-0 sudo[187820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:47:36 compute-0 sudo[187820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:47:36 compute-0 sudo[187820]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:36 compute-0 sudo[187845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:47:36 compute-0 sudo[187845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:47:36 compute-0 sudo[187845]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:36 compute-0 sudo[187870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:47:36 compute-0 sudo[187870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:47:36 compute-0 sudo[187870]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:36 compute-0 sudo[187895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:47:36 compute-0 sudo[187895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:47:36 compute-0 podman[187961]: 2025-10-01 16:47:36.571009617 +0000 UTC m=+0.048000930 container create fe2b31d42f5da5e44c0fe41f09c0cf110fad311da391ed44deeb5a9349060dea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_einstein, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:47:36 compute-0 systemd[1]: Started libpod-conmon-fe2b31d42f5da5e44c0fe41f09c0cf110fad311da391ed44deeb5a9349060dea.scope.
Oct 01 16:47:36 compute-0 ceph-mon[74273]: pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:36 compute-0 podman[187961]: 2025-10-01 16:47:36.550630273 +0000 UTC m=+0.027621636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:47:36 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:47:36 compute-0 podman[187961]: 2025-10-01 16:47:36.673092276 +0000 UTC m=+0.150083679 container init fe2b31d42f5da5e44c0fe41f09c0cf110fad311da391ed44deeb5a9349060dea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_einstein, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 16:47:36 compute-0 podman[187961]: 2025-10-01 16:47:36.680224722 +0000 UTC m=+0.157216065 container start fe2b31d42f5da5e44c0fe41f09c0cf110fad311da391ed44deeb5a9349060dea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_einstein, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:47:36 compute-0 podman[187961]: 2025-10-01 16:47:36.684509576 +0000 UTC m=+0.161500889 container attach fe2b31d42f5da5e44c0fe41f09c0cf110fad311da391ed44deeb5a9349060dea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_einstein, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 16:47:36 compute-0 condescending_einstein[187978]: 167 167
Oct 01 16:47:36 compute-0 systemd[1]: libpod-fe2b31d42f5da5e44c0fe41f09c0cf110fad311da391ed44deeb5a9349060dea.scope: Deactivated successfully.
Oct 01 16:47:36 compute-0 podman[187961]: 2025-10-01 16:47:36.687313976 +0000 UTC m=+0.164305329 container died fe2b31d42f5da5e44c0fe41f09c0cf110fad311da391ed44deeb5a9349060dea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_einstein, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 16:47:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d723da3a555562c5f7c6b3829be1b60a6b1be867f794ec15347cfa70ad65c60-merged.mount: Deactivated successfully.
Oct 01 16:47:36 compute-0 podman[187961]: 2025-10-01 16:47:36.74149498 +0000 UTC m=+0.218486293 container remove fe2b31d42f5da5e44c0fe41f09c0cf110fad311da391ed44deeb5a9349060dea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 16:47:36 compute-0 systemd[1]: libpod-conmon-fe2b31d42f5da5e44c0fe41f09c0cf110fad311da391ed44deeb5a9349060dea.scope: Deactivated successfully.
Oct 01 16:47:36 compute-0 podman[188004]: 2025-10-01 16:47:36.982171334 +0000 UTC m=+0.068162555 container create bd2f83b5e2828fe74351f5b087d87eaa297c935a143cf443b856cfcff08de56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_northcutt, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct 01 16:47:37 compute-0 systemd[1]: Started libpod-conmon-bd2f83b5e2828fe74351f5b087d87eaa297c935a143cf443b856cfcff08de56d.scope.
Oct 01 16:47:37 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2d327a5360a91a2827cf32f061120d6dd375cdda4cfc052825c1c5cedf908fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:47:37 compute-0 podman[188004]: 2025-10-01 16:47:36.953461162 +0000 UTC m=+0.039452473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2d327a5360a91a2827cf32f061120d6dd375cdda4cfc052825c1c5cedf908fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2d327a5360a91a2827cf32f061120d6dd375cdda4cfc052825c1c5cedf908fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2d327a5360a91a2827cf32f061120d6dd375cdda4cfc052825c1c5cedf908fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:47:37 compute-0 podman[188004]: 2025-10-01 16:47:37.057364039 +0000 UTC m=+0.143355330 container init bd2f83b5e2828fe74351f5b087d87eaa297c935a143cf443b856cfcff08de56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 16:47:37 compute-0 podman[188004]: 2025-10-01 16:47:37.068405183 +0000 UTC m=+0.154396434 container start bd2f83b5e2828fe74351f5b087d87eaa297c935a143cf443b856cfcff08de56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:47:37 compute-0 podman[188004]: 2025-10-01 16:47:37.071770357 +0000 UTC m=+0.157761598 container attach bd2f83b5e2828fe74351f5b087d87eaa297c935a143cf443b856cfcff08de56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_northcutt, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:47:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]: {
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:         "osd_id": 2,
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:         "type": "bluestore"
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:     },
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:         "osd_id": 0,
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:         "type": "bluestore"
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:     },
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:         "osd_id": 1,
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:         "type": "bluestore"
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]:     }
Oct 01 16:47:38 compute-0 vigilant_northcutt[188020]: }
Oct 01 16:47:38 compute-0 systemd[1]: libpod-bd2f83b5e2828fe74351f5b087d87eaa297c935a143cf443b856cfcff08de56d.scope: Deactivated successfully.
Oct 01 16:47:38 compute-0 podman[188004]: 2025-10-01 16:47:38.168584625 +0000 UTC m=+1.254575876 container died bd2f83b5e2828fe74351f5b087d87eaa297c935a143cf443b856cfcff08de56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_northcutt, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 16:47:38 compute-0 systemd[1]: libpod-bd2f83b5e2828fe74351f5b087d87eaa297c935a143cf443b856cfcff08de56d.scope: Consumed 1.111s CPU time.
Oct 01 16:47:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2d327a5360a91a2827cf32f061120d6dd375cdda4cfc052825c1c5cedf908fa-merged.mount: Deactivated successfully.
Oct 01 16:47:38 compute-0 podman[188004]: 2025-10-01 16:47:38.249025484 +0000 UTC m=+1.335016735 container remove bd2f83b5e2828fe74351f5b087d87eaa297c935a143cf443b856cfcff08de56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:47:38 compute-0 systemd[1]: libpod-conmon-bd2f83b5e2828fe74351f5b087d87eaa297c935a143cf443b856cfcff08de56d.scope: Deactivated successfully.
Oct 01 16:47:38 compute-0 sudo[187895]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:47:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:47:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:47:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:47:38 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev c9aec407-26d0-44f6-a396-543039992221 does not exist
Oct 01 16:47:38 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 377036f5-4f85-4cc7-ae32-e18eb8633835 does not exist
Oct 01 16:47:38 compute-0 sudo[188068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:47:38 compute-0 sudo[188068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:47:38 compute-0 sudo[188068]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:38 compute-0 sudo[188093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:47:38 compute-0 sudo[188093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:47:38 compute-0 sudo[188093]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:38 compute-0 ceph-mon[74273]: pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:38 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:47:38 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:47:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:47:40 compute-0 kernel: SELinux:  Converting 2767 SID table entries...
Oct 01 16:47:40 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 16:47:40 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 01 16:47:40 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 16:47:40 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 01 16:47:40 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 16:47:40 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 16:47:40 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 16:47:40 compute-0 ceph-mon[74273]: pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:47:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:47:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:47:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:47:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:47:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:47:41 compute-0 groupadd[188130]: group added to /etc/group: name=dnsmasq, GID=991
Oct 01 16:47:41 compute-0 groupadd[188130]: group added to /etc/gshadow: name=dnsmasq
Oct 01 16:47:41 compute-0 groupadd[188130]: new group: name=dnsmasq, GID=991
Oct 01 16:47:41 compute-0 useradd[188137]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Oct 01 16:47:41 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Oct 01 16:47:41 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct 01 16:47:41 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Oct 01 16:47:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:42 compute-0 groupadd[188150]: group added to /etc/group: name=clevis, GID=990
Oct 01 16:47:42 compute-0 groupadd[188150]: group added to /etc/gshadow: name=clevis
Oct 01 16:47:42 compute-0 groupadd[188150]: new group: name=clevis, GID=990
Oct 01 16:47:42 compute-0 useradd[188157]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Oct 01 16:47:42 compute-0 usermod[188167]: add 'clevis' to group 'tss'
Oct 01 16:47:42 compute-0 usermod[188167]: add 'clevis' to shadow group 'tss'
Oct 01 16:47:42 compute-0 ceph-mon[74273]: pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:44 compute-0 ceph-mon[74273]: pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:44 compute-0 polkitd[6497]: Reloading rules
Oct 01 16:47:44 compute-0 polkitd[6497]: Collecting garbage unconditionally...
Oct 01 16:47:44 compute-0 polkitd[6497]: Loading rules from directory /etc/polkit-1/rules.d
Oct 01 16:47:44 compute-0 polkitd[6497]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 01 16:47:44 compute-0 polkitd[6497]: Finished loading, compiling and executing 4 rules
Oct 01 16:47:44 compute-0 polkitd[6497]: Reloading rules
Oct 01 16:47:44 compute-0 polkitd[6497]: Collecting garbage unconditionally...
Oct 01 16:47:44 compute-0 polkitd[6497]: Loading rules from directory /etc/polkit-1/rules.d
Oct 01 16:47:44 compute-0 polkitd[6497]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 01 16:47:44 compute-0 polkitd[6497]: Finished loading, compiling and executing 4 rules
Oct 01 16:47:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:47:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:46 compute-0 groupadd[188354]: group added to /etc/group: name=ceph, GID=167
Oct 01 16:47:46 compute-0 groupadd[188354]: group added to /etc/gshadow: name=ceph
Oct 01 16:47:46 compute-0 groupadd[188354]: new group: name=ceph, GID=167
Oct 01 16:47:46 compute-0 useradd[188370]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Oct 01 16:47:46 compute-0 podman[188355]: 2025-10-01 16:47:46.377461939 +0000 UTC m=+0.128683041 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 01 16:47:46 compute-0 ceph-mon[74273]: pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:48 compute-0 ceph-mon[74273]: pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:49 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Oct 01 16:47:49 compute-0 sshd[1002]: Received signal 15; terminating.
Oct 01 16:47:49 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Oct 01 16:47:49 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Oct 01 16:47:49 compute-0 systemd[1]: sshd.service: Consumed 2.615s CPU time, read 532.0K from disk, written 0B to disk.
Oct 01 16:47:49 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Oct 01 16:47:49 compute-0 systemd[1]: Stopping sshd-keygen.target...
Oct 01 16:47:49 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 01 16:47:49 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 01 16:47:49 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 01 16:47:49 compute-0 systemd[1]: Reached target sshd-keygen.target.
Oct 01 16:47:49 compute-0 systemd[1]: Starting OpenSSH server daemon...
Oct 01 16:47:49 compute-0 sshd[189011]: Server listening on 0.0.0.0 port 22.
Oct 01 16:47:49 compute-0 sshd[189011]: Server listening on :: port 22.
Oct 01 16:47:49 compute-0 systemd[1]: Started OpenSSH server daemon.
Oct 01 16:47:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:47:50 compute-0 ceph-mon[74273]: pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:50 compute-0 podman[189176]: 2025-10-01 16:47:50.849788382 +0000 UTC m=+0.065868441 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 01 16:47:51 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 01 16:47:51 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 01 16:47:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:51 compute-0 systemd[1]: Reloading.
Oct 01 16:47:51 compute-0 systemd-rc-local-generator[189289]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:47:51 compute-0 systemd-sysv-generator[189293]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:47:52 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 01 16:47:53 compute-0 ceph-mon[74273]: pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:53 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 01 16:47:53 compute-0 PackageKit[191756]: daemon start
Oct 01 16:47:54 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 01 16:47:54 compute-0 ceph-mon[74273]: pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:54 compute-0 sudo[168914]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:47:55 compute-0 sudo[193255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikiczzdhlzwnrgjvxtnrlniiytxcgphp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337274.5929582-336-117296944177171/AnsiballZ_systemd.py'
Oct 01 16:47:55 compute-0 sudo[193255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:47:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:55 compute-0 python3.9[193275]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 01 16:47:55 compute-0 systemd[1]: Reloading.
Oct 01 16:47:55 compute-0 systemd-sysv-generator[193680]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:47:55 compute-0 systemd-rc-local-generator[193674]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:47:55 compute-0 sudo[193255]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:56 compute-0 sudo[194510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnpgccvkxhfpgdunpzoaopiutqsbqvor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337276.0991411-336-40304152918159/AnsiballZ_systemd.py'
Oct 01 16:47:56 compute-0 sudo[194510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:47:56 compute-0 ceph-mon[74273]: pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:56 compute-0 python3.9[194527]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 01 16:47:56 compute-0 systemd[1]: Reloading.
Oct 01 16:47:56 compute-0 systemd-rc-local-generator[194902]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:47:56 compute-0 systemd-sysv-generator[194909]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:47:57 compute-0 sudo[194510]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:57 compute-0 sudo[195793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oixfduhldslkldthpaedpznymqbhzhfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337277.2383196-336-155398224988064/AnsiballZ_systemd.py'
Oct 01 16:47:57 compute-0 sudo[195793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:47:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:57 compute-0 python3.9[195818]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 01 16:47:57 compute-0 systemd[1]: Reloading.
Oct 01 16:47:58 compute-0 systemd-sysv-generator[196267]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:47:58 compute-0 systemd-rc-local-generator[196262]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:47:58 compute-0 sudo[195793]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:58 compute-0 ceph-mon[74273]: pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:58 compute-0 sudo[197138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwsaubalnknrczlfucfvbldnorxvoucr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337278.4430063-336-59893609096375/AnsiballZ_systemd.py'
Oct 01 16:47:58 compute-0 sudo[197138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:47:59 compute-0 python3.9[197161]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 01 16:47:59 compute-0 systemd[1]: Reloading.
Oct 01 16:47:59 compute-0 systemd-rc-local-generator[197543]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:47:59 compute-0 systemd-sysv-generator[197551]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:47:59 compute-0 sudo[197138]: pam_unix(sudo:session): session closed for user root
Oct 01 16:47:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:47:59 compute-0 auditd[699]: Audit daemon rotating log files
Oct 01 16:47:59 compute-0 sudo[198345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijukopfipswyhhwokmhwyabvzpkcacsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337279.5893154-365-119331030286511/AnsiballZ_systemd.py'
Oct 01 16:47:59 compute-0 sudo[198345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:48:00 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 01 16:48:00 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 01 16:48:00 compute-0 systemd[1]: man-db-cache-update.service: Consumed 10.852s CPU time.
Oct 01 16:48:00 compute-0 systemd[1]: run-rad229a030e4f44f28239439f3647fa5a.service: Deactivated successfully.
Oct 01 16:48:00 compute-0 python3.9[198356]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 16:48:00 compute-0 ceph-mon[74273]: pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:48:00.673868) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337280673949, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2040, "num_deletes": 251, "total_data_size": 3528478, "memory_usage": 3573952, "flush_reason": "Manual Compaction"}
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337280698598, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3442684, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9726, "largest_seqno": 11765, "table_properties": {"data_size": 3433436, "index_size": 5870, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17795, "raw_average_key_size": 19, "raw_value_size": 3415077, "raw_average_value_size": 3732, "num_data_blocks": 267, "num_entries": 915, "num_filter_entries": 915, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759337045, "oldest_key_time": 1759337045, "file_creation_time": 1759337280, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 24788 microseconds, and 12029 cpu microseconds.
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:48:00.698657) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3442684 bytes OK
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:48:00.698682) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:48:00.700517) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:48:00.700540) EVENT_LOG_v1 {"time_micros": 1759337280700533, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:48:00.700562) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3519971, prev total WAL file size 3519971, number of live WAL files 2.
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:48:00.702108) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3361KB)], [26(5929KB)]
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337280702164, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9514750, "oldest_snapshot_seqno": -1}
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3706 keys, 7926453 bytes, temperature: kUnknown
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337280760973, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7926453, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7897992, "index_size": 18111, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88982, "raw_average_key_size": 24, "raw_value_size": 7827414, "raw_average_value_size": 2112, "num_data_blocks": 784, "num_entries": 3706, "num_filter_entries": 3706, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336399, "oldest_key_time": 0, "file_creation_time": 1759337280, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:48:00.761713) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7926453 bytes
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:48:00.763229) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.5 rd, 133.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.8 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4220, records dropped: 514 output_compression: NoCompression
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:48:00.763261) EVENT_LOG_v1 {"time_micros": 1759337280763246, "job": 10, "event": "compaction_finished", "compaction_time_micros": 59264, "compaction_time_cpu_micros": 38462, "output_level": 6, "num_output_files": 1, "total_output_size": 7926453, "num_input_records": 4220, "num_output_records": 3706, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337280764548, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337280766677, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:48:00.702003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:48:00.766828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:48:00.766836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:48:00.766839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:48:00.766842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:48:00 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:48:00.766845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:48:01 compute-0 systemd[1]: Reloading.
Oct 01 16:48:01 compute-0 systemd-rc-local-generator[198476]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:48:01 compute-0 systemd-sysv-generator[198480]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:48:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:01 compute-0 sudo[198345]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:02 compute-0 sudo[198634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlbdosrcbiuxexjddlumknztqmbomqkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337281.8678815-365-77511990269289/AnsiballZ_systemd.py'
Oct 01 16:48:02 compute-0 sudo[198634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:02 compute-0 python3.9[198636]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 16:48:02 compute-0 ceph-mon[74273]: pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:02 compute-0 systemd[1]: Reloading.
Oct 01 16:48:02 compute-0 systemd-rc-local-generator[198667]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:48:02 compute-0 systemd-sysv-generator[198671]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:48:03 compute-0 sudo[198634]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:03 compute-0 sudo[198824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymlcmjxbzyxtbyduhffljwpjgtbizttm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337283.2613997-365-30617139393505/AnsiballZ_systemd.py'
Oct 01 16:48:03 compute-0 sudo[198824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:03 compute-0 python3.9[198826]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 16:48:04 compute-0 ceph-mon[74273]: pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:48:05 compute-0 systemd[1]: Reloading.
Oct 01 16:48:05 compute-0 systemd-rc-local-generator[198859]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:48:05 compute-0 systemd-sysv-generator[198863]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:48:05 compute-0 sudo[198824]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:06 compute-0 sudo[199016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddtswqyglwczkgsohweilnetcczrwhkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337285.580152-365-128178278060013/AnsiballZ_systemd.py'
Oct 01 16:48:06 compute-0 sudo[199016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:06 compute-0 python3.9[199018]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 16:48:06 compute-0 sudo[199016]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:06 compute-0 ceph-mon[74273]: pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:06 compute-0 sudo[199171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itvwenrunwvkztyvksaxqydvybpmchvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337286.5696807-365-201144995250698/AnsiballZ_systemd.py'
Oct 01 16:48:06 compute-0 sudo[199171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:07 compute-0 python3.9[199173]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 16:48:07 compute-0 systemd[1]: Reloading.
Oct 01 16:48:07 compute-0 systemd-rc-local-generator[199201]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:48:07 compute-0 systemd-sysv-generator[199206]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:48:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:07 compute-0 sudo[199171]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:08 compute-0 sudo[199361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzzrlgopjjergfiivoswbxfjvhuwcngn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337287.9262877-401-36732614366562/AnsiballZ_systemd.py'
Oct 01 16:48:08 compute-0 sudo[199361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:08 compute-0 python3.9[199363]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 01 16:48:08 compute-0 ceph-mon[74273]: pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:09 compute-0 systemd[1]: Reloading.
Oct 01 16:48:09 compute-0 systemd-sysv-generator[199397]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:48:09 compute-0 systemd-rc-local-generator[199393]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:48:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:48:10 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 01 16:48:10 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 01 16:48:10 compute-0 sudo[199361]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:10 compute-0 ceph-mon[74273]: pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:10 compute-0 sudo[199554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxqauxeqzwgbpkzdzqyfguxmizpywrax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337290.3652143-409-236965370963832/AnsiballZ_systemd.py'
Oct 01 16:48:10 compute-0 sudo[199554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:11 compute-0 python3.9[199556]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 16:48:11 compute-0 sudo[199554]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:48:11
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'default.rgw.control', 'vms', 'default.rgw.meta', '.mgr', 'volumes', 'cephfs.cephfs.data', 'images', 'backups', 'cephfs.cephfs.meta']
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:48:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:11 compute-0 sudo[199709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfcukcswtohebfnwnyyalcgadqcglswn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337291.369545-409-208663271529477/AnsiballZ_systemd.py'
Oct 01 16:48:11 compute-0 sudo[199709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:12 compute-0 python3.9[199711]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 16:48:12 compute-0 ceph-mon[74273]: pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:13 compute-0 sudo[199709]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:13 compute-0 sudo[199864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biocyncejjhmvqzdixixhyhbisgcfijx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337293.4119728-409-184059605349430/AnsiballZ_systemd.py'
Oct 01 16:48:13 compute-0 sudo[199864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:14 compute-0 python3.9[199866]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 16:48:14 compute-0 sudo[199864]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:14 compute-0 sudo[200019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpzmemooyjebwycrdnytmttzegqlpcct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337294.298461-409-106577663392190/AnsiballZ_systemd.py'
Oct 01 16:48:14 compute-0 sudo[200019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:14 compute-0 ceph-mon[74273]: pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:14 compute-0 python3.9[200021]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 16:48:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:48:15 compute-0 sudo[200019]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:15 compute-0 sudo[200174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eolyblrcprgjglaswqzgficrbjovvame ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337295.2105653-409-177133436559511/AnsiballZ_systemd.py'
Oct 01 16:48:15 compute-0 sudo[200174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:15 compute-0 python3.9[200176]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 16:48:16 compute-0 sudo[200174]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:16 compute-0 sudo[200339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tljcpvwiwfuyhylghcdywrigkvbojhrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337296.1792648-409-50914182688810/AnsiballZ_systemd.py'
Oct 01 16:48:16 compute-0 sudo[200339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:16 compute-0 podman[200303]: 2025-10-01 16:48:16.582761078 +0000 UTC m=+0.091320394 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:48:16 compute-0 ceph-mon[74273]: pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:16 compute-0 python3.9[200349]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 16:48:16 compute-0 sudo[200339]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:17 compute-0 sudo[200511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztodcjeqzddfelcllnvppighxptojaky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337297.0395842-409-232122963734157/AnsiballZ_systemd.py'
Oct 01 16:48:17 compute-0 sudo[200511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:17 compute-0 python3.9[200513]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 16:48:17 compute-0 sudo[200511]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:18 compute-0 sudo[200666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohdmmdboucufoozglzjlupzwdipibxzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337298.00402-409-207957118113297/AnsiballZ_systemd.py'
Oct 01 16:48:18 compute-0 sudo[200666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:18 compute-0 python3.9[200668]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 16:48:18 compute-0 ceph-mon[74273]: pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:18 compute-0 sudo[200666]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:19 compute-0 sudo[200821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swosyxxxbzagwqfbencsohisawulqqiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337298.9532313-409-94251233628814/AnsiballZ_systemd.py'
Oct 01 16:48:19 compute-0 sudo[200821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:19 compute-0 python3.9[200823]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 16:48:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:19 compute-0 sudo[200821]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:48:19.947 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:48:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:48:19.947 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:48:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:48:19.947 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:48:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:48:20 compute-0 sudo[200976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raoqkvuzkykmanvhwqzpljyswwxuigax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337299.8030326-409-213955833759316/AnsiballZ_systemd.py'
Oct 01 16:48:20 compute-0 sudo[200976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:20 compute-0 python3.9[200978]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 16:48:20 compute-0 sudo[200976]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:20 compute-0 ceph-mon[74273]: pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:48:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:48:21 compute-0 sudo[201144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjzosyyawnfzwaotmfcvdwmcoqryfgyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337300.7025476-409-142007194627616/AnsiballZ_systemd.py'
Oct 01 16:48:21 compute-0 sudo[201144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:21 compute-0 podman[201105]: 2025-10-01 16:48:21.129323845 +0000 UTC m=+0.084919939 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 01 16:48:21 compute-0 python3.9[201154]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 16:48:21 compute-0 sudo[201144]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:21 compute-0 sudo[201308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkmojeoteuquzyszcmmbqcfcchnzprql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337301.6724117-409-188004026684389/AnsiballZ_systemd.py'
Oct 01 16:48:21 compute-0 sudo[201308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:22 compute-0 python3.9[201310]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 16:48:22 compute-0 sudo[201308]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:22 compute-0 ceph-mon[74273]: pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:22 compute-0 sudo[201463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjgtwyskzziielhjhyowynltdlwbjdkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337302.4723632-409-118615670224035/AnsiballZ_systemd.py'
Oct 01 16:48:22 compute-0 sudo[201463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:23 compute-0 python3.9[201465]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 16:48:23 compute-0 sudo[201463]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:23 compute-0 sudo[201618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soslnvduilxjhghmexiyubmqpytylzjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337303.3906085-409-60570413177091/AnsiballZ_systemd.py'
Oct 01 16:48:23 compute-0 sudo[201618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:24 compute-0 python3.9[201620]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 16:48:24 compute-0 sudo[201618]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:24 compute-0 ceph-mon[74273]: pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:48:25 compute-0 sudo[201773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtsbhtcenrmhyxvimneedhudlysyjkla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337304.6559103-511-61454985518268/AnsiballZ_file.py'
Oct 01 16:48:25 compute-0 sudo[201773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:25 compute-0 python3.9[201775]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:48:25 compute-0 sudo[201773]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:25 compute-0 sudo[201925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnmdaezcorpwilmkvmpevkaafnsutylu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337305.5277798-511-263778415931753/AnsiballZ_file.py'
Oct 01 16:48:25 compute-0 sudo[201925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:26 compute-0 python3.9[201927]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:48:26 compute-0 sudo[201925]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:26 compute-0 sudo[202077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfdtwhvhboaggecvijaxdhdnxlaormnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337306.3707578-511-156029633204706/AnsiballZ_file.py'
Oct 01 16:48:26 compute-0 sudo[202077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:26 compute-0 ceph-mon[74273]: pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:27 compute-0 python3.9[202079]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:48:27 compute-0 sudo[202077]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:27 compute-0 sudo[202229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-makmswmqtgytynwlfepdsqzuarozazhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337307.1835446-511-36501942252523/AnsiballZ_file.py'
Oct 01 16:48:27 compute-0 sudo[202229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:27 compute-0 python3.9[202231]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:48:27 compute-0 sudo[202229]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:28 compute-0 sudo[202381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgislssnuwrrivoxbtbmtdozgoslibcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337307.94997-511-94193906828801/AnsiballZ_file.py'
Oct 01 16:48:28 compute-0 sudo[202381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:28 compute-0 python3.9[202383]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:48:28 compute-0 sudo[202381]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:28 compute-0 ceph-mon[74273]: pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:29 compute-0 sudo[202533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baqkvhypijpbpxijlxqbpajqipvlpasi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337308.6896856-511-118943696712602/AnsiballZ_file.py'
Oct 01 16:48:29 compute-0 sudo[202533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:29 compute-0 python3.9[202535]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:48:29 compute-0 sudo[202533]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:29 compute-0 sudo[202685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvvndlwmirhbskodlaeowcbhcdfroayi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337309.4502895-554-123187044932986/AnsiballZ_stat.py'
Oct 01 16:48:29 compute-0 sudo[202685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:48:30 compute-0 python3.9[202687]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:48:30 compute-0 sudo[202685]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:30 compute-0 ceph-mon[74273]: pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:30 compute-0 sudo[202810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daabdrqjjxayymtuvxlooljbwiqeyftn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337309.4502895-554-123187044932986/AnsiballZ_copy.py'
Oct 01 16:48:30 compute-0 sudo[202810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:31 compute-0 python3.9[202812]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759337309.4502895-554-123187044932986/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:31 compute-0 sudo[202810]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:31 compute-0 sudo[202962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isdbufqusdjtqbajsnypqhplwkqpcspf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337311.3034441-554-194235691505669/AnsiballZ_stat.py'
Oct 01 16:48:31 compute-0 sudo[202962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:31 compute-0 python3.9[202964]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:48:31 compute-0 sudo[202962]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:32 compute-0 sudo[203087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhtcenryrilkzctanzpkpdpfynfgwvit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337311.3034441-554-194235691505669/AnsiballZ_copy.py'
Oct 01 16:48:32 compute-0 sudo[203087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:32 compute-0 python3.9[203089]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759337311.3034441-554-194235691505669/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:32 compute-0 sudo[203087]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:32 compute-0 ceph-mon[74273]: pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:33 compute-0 sudo[203239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwrfikmtmwpmejjlyxbkjxtjochtegqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337312.8270311-554-76851541456534/AnsiballZ_stat.py'
Oct 01 16:48:33 compute-0 sudo[203239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:33 compute-0 python3.9[203241]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:48:33 compute-0 sudo[203239]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:33 compute-0 sudo[203364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smnionppiwhbjdfprfeenwkhzrestrvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337312.8270311-554-76851541456534/AnsiballZ_copy.py'
Oct 01 16:48:33 compute-0 sudo[203364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:34 compute-0 python3.9[203366]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759337312.8270311-554-76851541456534/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:34 compute-0 sudo[203364]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:34 compute-0 sudo[203516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khypkvfqtvrcghpyfdvmdsdqpgydgxea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337314.2238605-554-225361331424791/AnsiballZ_stat.py'
Oct 01 16:48:34 compute-0 sudo[203516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:34 compute-0 python3.9[203518]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:48:34 compute-0 sudo[203516]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:34 compute-0 ceph-mon[74273]: pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:48:35 compute-0 sudo[203641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idslynxujguarpkdlxqwquffauvbvcwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337314.2238605-554-225361331424791/AnsiballZ_copy.py'
Oct 01 16:48:35 compute-0 sudo[203641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:35 compute-0 python3.9[203643]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759337314.2238605-554-225361331424791/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:35 compute-0 sudo[203641]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:35 compute-0 sudo[203793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shkltoadofuheisduqztbtlhmusaxlue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337315.5432935-554-193510745594438/AnsiballZ_stat.py'
Oct 01 16:48:35 compute-0 sudo[203793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:36 compute-0 python3.9[203795]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:48:36 compute-0 sudo[203793]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:36 compute-0 sudo[203918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niyqyoefppvpogbogalnznlxsjfzqfhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337315.5432935-554-193510745594438/AnsiballZ_copy.py'
Oct 01 16:48:36 compute-0 sudo[203918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:36 compute-0 python3.9[203920]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759337315.5432935-554-193510745594438/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:36 compute-0 ceph-mon[74273]: pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:36 compute-0 sudo[203918]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:37 compute-0 sudo[204070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjbnfwfkyjjwvptzhonqzurgpoletkaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337317.0063388-554-42736398757598/AnsiballZ_stat.py'
Oct 01 16:48:37 compute-0 sudo[204070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:37 compute-0 python3.9[204072]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:48:37 compute-0 sudo[204070]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:38 compute-0 sudo[204195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frdfihxgvnkzwzcjsrcxpukfohpfsmgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337317.0063388-554-42736398757598/AnsiballZ_copy.py'
Oct 01 16:48:38 compute-0 sudo[204195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:38 compute-0 python3.9[204197]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759337317.0063388-554-42736398757598/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:38 compute-0 sudo[204195]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:38 compute-0 sudo[204249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:48:38 compute-0 sudo[204249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:48:38 compute-0 sudo[204249]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:38 compute-0 sudo[204299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:48:38 compute-0 sudo[204299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:48:38 compute-0 sudo[204299]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:38 compute-0 sudo[204347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:48:38 compute-0 sudo[204347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:48:38 compute-0 sudo[204347]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:38 compute-0 sudo[204397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:48:38 compute-0 sudo[204397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:48:38 compute-0 sudo[204445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycrxwxemwuetjlqhuhwfzlkfbkrvzkah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337318.4124537-554-69348573711847/AnsiballZ_stat.py'
Oct 01 16:48:38 compute-0 sudo[204445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:38 compute-0 ceph-mon[74273]: pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:38 compute-0 python3.9[204449]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:48:38 compute-0 sudo[204445]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:39 compute-0 sudo[204397]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:48:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:48:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:48:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:48:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:48:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:48:39 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 21ad5036-781a-4474-a309-f4804dd46b33 does not exist
Oct 01 16:48:39 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 6993202f-2545-4c34-b295-9959582bf5a3 does not exist
Oct 01 16:48:39 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev e62d02de-7173-4d8f-ac54-296413c28a63 does not exist
Oct 01 16:48:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:48:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:48:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:48:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:48:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:48:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:48:39 compute-0 sudo[204625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcjypgxpvkqxukjofztaychlzzhqbrnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337318.4124537-554-69348573711847/AnsiballZ_copy.py'
Oct 01 16:48:39 compute-0 sudo[204625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:39 compute-0 sudo[204577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:48:39 compute-0 sudo[204577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:48:39 compute-0 sudo[204577]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:39 compute-0 sudo[204630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:48:39 compute-0 sudo[204630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:48:39 compute-0 sudo[204630]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:39 compute-0 sudo[204655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:48:39 compute-0 sudo[204655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:48:39 compute-0 sudo[204655]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:39 compute-0 sudo[204680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:48:39 compute-0 sudo[204680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:48:39 compute-0 python3.9[204628]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759337318.4124537-554-69348573711847/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:39 compute-0 sudo[204625]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:48:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:48:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:48:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:48:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:48:39 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:48:39 compute-0 podman[204828]: 2025-10-01 16:48:39.919036436 +0000 UTC m=+0.051250938 container create e3ab25583df568687d75b1f83dddb7567f9a63c09ddd5751d01b97aaca50abbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct 01 16:48:39 compute-0 systemd[1]: Started libpod-conmon-e3ab25583df568687d75b1f83dddb7567f9a63c09ddd5751d01b97aaca50abbd.scope.
Oct 01 16:48:39 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:48:39 compute-0 podman[204828]: 2025-10-01 16:48:39.90095529 +0000 UTC m=+0.033169822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:48:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:48:40 compute-0 podman[204828]: 2025-10-01 16:48:40.010062394 +0000 UTC m=+0.142276916 container init e3ab25583df568687d75b1f83dddb7567f9a63c09ddd5751d01b97aaca50abbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 16:48:40 compute-0 podman[204828]: 2025-10-01 16:48:40.017070946 +0000 UTC m=+0.149285458 container start e3ab25583df568687d75b1f83dddb7567f9a63c09ddd5751d01b97aaca50abbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ptolemy, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 16:48:40 compute-0 podman[204828]: 2025-10-01 16:48:40.020410031 +0000 UTC m=+0.152624543 container attach e3ab25583df568687d75b1f83dddb7567f9a63c09ddd5751d01b97aaca50abbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ptolemy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:48:40 compute-0 exciting_ptolemy[204879]: 167 167
Oct 01 16:48:40 compute-0 systemd[1]: libpod-e3ab25583df568687d75b1f83dddb7567f9a63c09ddd5751d01b97aaca50abbd.scope: Deactivated successfully.
Oct 01 16:48:40 compute-0 conmon[204879]: conmon e3ab25583df568687d75 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e3ab25583df568687d75b1f83dddb7567f9a63c09ddd5751d01b97aaca50abbd.scope/container/memory.events
Oct 01 16:48:40 compute-0 podman[204828]: 2025-10-01 16:48:40.023976011 +0000 UTC m=+0.156190503 container died e3ab25583df568687d75b1f83dddb7567f9a63c09ddd5751d01b97aaca50abbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 16:48:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-feba6d59db781aeda55470fc85de740e5c80790360de86902c87ad9d8c7e87ac-merged.mount: Deactivated successfully.
Oct 01 16:48:40 compute-0 podman[204828]: 2025-10-01 16:48:40.065526058 +0000 UTC m=+0.197740560 container remove e3ab25583df568687d75b1f83dddb7567f9a63c09ddd5751d01b97aaca50abbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:48:40 compute-0 systemd[1]: libpod-conmon-e3ab25583df568687d75b1f83dddb7567f9a63c09ddd5751d01b97aaca50abbd.scope: Deactivated successfully.
Oct 01 16:48:40 compute-0 sudo[204926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bucxlrceqicnphwlukszhfbihigyiiuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337319.6821444-554-3746936409468/AnsiballZ_stat.py'
Oct 01 16:48:40 compute-0 sudo[204926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:40 compute-0 podman[204936]: 2025-10-01 16:48:40.276421019 +0000 UTC m=+0.045415082 container create 4a00a17f833e3acaceb7a46e3ac0101ea128ca9dd195f52886f434f99783c8bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shirley, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 16:48:40 compute-0 python3.9[204930]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:48:40 compute-0 systemd[1]: Started libpod-conmon-4a00a17f833e3acaceb7a46e3ac0101ea128ca9dd195f52886f434f99783c8bd.scope.
Oct 01 16:48:40 compute-0 sudo[204926]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:40 compute-0 podman[204936]: 2025-10-01 16:48:40.259355173 +0000 UTC m=+0.028349216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:48:40 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f65cc0ab8cfcc48daaf1b320fcda29e159435598b62cfe2d34ad62e336e91071/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f65cc0ab8cfcc48daaf1b320fcda29e159435598b62cfe2d34ad62e336e91071/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f65cc0ab8cfcc48daaf1b320fcda29e159435598b62cfe2d34ad62e336e91071/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f65cc0ab8cfcc48daaf1b320fcda29e159435598b62cfe2d34ad62e336e91071/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f65cc0ab8cfcc48daaf1b320fcda29e159435598b62cfe2d34ad62e336e91071/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:48:40 compute-0 podman[204936]: 2025-10-01 16:48:40.379796315 +0000 UTC m=+0.148790378 container init 4a00a17f833e3acaceb7a46e3ac0101ea128ca9dd195f52886f434f99783c8bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:48:40 compute-0 podman[204936]: 2025-10-01 16:48:40.395967049 +0000 UTC m=+0.164961082 container start 4a00a17f833e3acaceb7a46e3ac0101ea128ca9dd195f52886f434f99783c8bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shirley, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:48:40 compute-0 podman[204936]: 2025-10-01 16:48:40.399639077 +0000 UTC m=+0.168633160 container attach 4a00a17f833e3acaceb7a46e3ac0101ea128ca9dd195f52886f434f99783c8bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shirley, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 16:48:40 compute-0 sudo[205080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slacrmftfouahggdsixtcafsqounwcau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337319.6821444-554-3746936409468/AnsiballZ_copy.py'
Oct 01 16:48:40 compute-0 sudo[205080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:40 compute-0 ceph-mon[74273]: pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:40 compute-0 python3.9[205082]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759337319.6821444-554-3746936409468/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:41 compute-0 sudo[205080]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:48:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:48:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:48:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:48:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:48:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:48:41 compute-0 optimistic_shirley[204955]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:48:41 compute-0 optimistic_shirley[204955]: --> relative data size: 1.0
Oct 01 16:48:41 compute-0 optimistic_shirley[204955]: --> All data devices are unavailable
Oct 01 16:48:41 compute-0 sudo[205255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxyrnrbeciuxejsizphqyxnulzowmggw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337321.2068532-667-180075569239042/AnsiballZ_command.py'
Oct 01 16:48:41 compute-0 sudo[205255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:41 compute-0 systemd[1]: libpod-4a00a17f833e3acaceb7a46e3ac0101ea128ca9dd195f52886f434f99783c8bd.scope: Deactivated successfully.
Oct 01 16:48:41 compute-0 podman[204936]: 2025-10-01 16:48:41.564464902 +0000 UTC m=+1.333458975 container died 4a00a17f833e3acaceb7a46e3ac0101ea128ca9dd195f52886f434f99783c8bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shirley, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 16:48:41 compute-0 systemd[1]: libpod-4a00a17f833e3acaceb7a46e3ac0101ea128ca9dd195f52886f434f99783c8bd.scope: Consumed 1.101s CPU time.
Oct 01 16:48:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f65cc0ab8cfcc48daaf1b320fcda29e159435598b62cfe2d34ad62e336e91071-merged.mount: Deactivated successfully.
Oct 01 16:48:41 compute-0 podman[204936]: 2025-10-01 16:48:41.651855941 +0000 UTC m=+1.420850014 container remove 4a00a17f833e3acaceb7a46e3ac0101ea128ca9dd195f52886f434f99783c8bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shirley, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:48:41 compute-0 systemd[1]: libpod-conmon-4a00a17f833e3acaceb7a46e3ac0101ea128ca9dd195f52886f434f99783c8bd.scope: Deactivated successfully.
Oct 01 16:48:41 compute-0 sudo[204680]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:41 compute-0 sudo[205271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:48:41 compute-0 sudo[205271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:48:41 compute-0 sudo[205271]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:41 compute-0 python3.9[205258]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 01 16:48:41 compute-0 sudo[205255]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:41 compute-0 sudo[205296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:48:41 compute-0 sudo[205296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:48:41 compute-0 sudo[205296]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:41 compute-0 sudo[205334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:48:41 compute-0 sudo[205334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:48:41 compute-0 sudo[205334]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:42 compute-0 sudo[205371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:48:42 compute-0 sudo[205371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:48:42 compute-0 podman[205535]: 2025-10-01 16:48:42.391190186 +0000 UTC m=+0.045789674 container create 6f24ad053f8babc9307c2756d4651b35f965594284d7564fb1f3a6f4eb9bb0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mccarthy, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:48:42 compute-0 systemd[1]: Started libpod-conmon-6f24ad053f8babc9307c2756d4651b35f965594284d7564fb1f3a6f4eb9bb0b3.scope.
Oct 01 16:48:42 compute-0 podman[205535]: 2025-10-01 16:48:42.371276646 +0000 UTC m=+0.025876164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:48:42 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:48:42 compute-0 sudo[205577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezcghetmvcjcfctbwnpmhbunfhkihxze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337322.054948-676-9040454005670/AnsiballZ_file.py'
Oct 01 16:48:42 compute-0 sudo[205577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:42 compute-0 podman[205535]: 2025-10-01 16:48:42.484776344 +0000 UTC m=+0.139375842 container init 6f24ad053f8babc9307c2756d4651b35f965594284d7564fb1f3a6f4eb9bb0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mccarthy, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:48:42 compute-0 podman[205535]: 2025-10-01 16:48:42.491550002 +0000 UTC m=+0.146149490 container start 6f24ad053f8babc9307c2756d4651b35f965594284d7564fb1f3a6f4eb9bb0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mccarthy, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 16:48:42 compute-0 podman[205535]: 2025-10-01 16:48:42.49468703 +0000 UTC m=+0.149286548 container attach 6f24ad053f8babc9307c2756d4651b35f965594284d7564fb1f3a6f4eb9bb0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:48:42 compute-0 quizzical_mccarthy[205578]: 167 167
Oct 01 16:48:42 compute-0 systemd[1]: libpod-6f24ad053f8babc9307c2756d4651b35f965594284d7564fb1f3a6f4eb9bb0b3.scope: Deactivated successfully.
Oct 01 16:48:42 compute-0 podman[205535]: 2025-10-01 16:48:42.503968458 +0000 UTC m=+0.158567966 container died 6f24ad053f8babc9307c2756d4651b35f965594284d7564fb1f3a6f4eb9bb0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mccarthy, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:48:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-67c368732df689cea78d1a6f1d71849c1cc938e7a213313f6cfb087f405aae41-merged.mount: Deactivated successfully.
Oct 01 16:48:42 compute-0 podman[205535]: 2025-10-01 16:48:42.554328262 +0000 UTC m=+0.208927780 container remove 6f24ad053f8babc9307c2756d4651b35f965594284d7564fb1f3a6f4eb9bb0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:48:42 compute-0 systemd[1]: libpod-conmon-6f24ad053f8babc9307c2756d4651b35f965594284d7564fb1f3a6f4eb9bb0b3.scope: Deactivated successfully.
Oct 01 16:48:42 compute-0 python3.9[205582]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:42 compute-0 podman[205602]: 2025-10-01 16:48:42.743847571 +0000 UTC m=+0.048775466 container create 1229b4b807749c9b30f95c628bbaf52cde0711b5dffdd3e53a839fbe2d8ffaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_poitras, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 16:48:42 compute-0 sudo[205577]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:42 compute-0 systemd[1]: Started libpod-conmon-1229b4b807749c9b30f95c628bbaf52cde0711b5dffdd3e53a839fbe2d8ffaa2.scope.
Oct 01 16:48:42 compute-0 podman[205602]: 2025-10-01 16:48:42.722552408 +0000 UTC m=+0.027480353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:48:42 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bd228b6d45561397d7a393508255bf7fced35062ed19d17be136db450a48bd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bd228b6d45561397d7a393508255bf7fced35062ed19d17be136db450a48bd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bd228b6d45561397d7a393508255bf7fced35062ed19d17be136db450a48bd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bd228b6d45561397d7a393508255bf7fced35062ed19d17be136db450a48bd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:48:42 compute-0 podman[205602]: 2025-10-01 16:48:42.849253318 +0000 UTC m=+0.154181273 container init 1229b4b807749c9b30f95c628bbaf52cde0711b5dffdd3e53a839fbe2d8ffaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 16:48:42 compute-0 podman[205602]: 2025-10-01 16:48:42.857563605 +0000 UTC m=+0.162491500 container start 1229b4b807749c9b30f95c628bbaf52cde0711b5dffdd3e53a839fbe2d8ffaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_poitras, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:48:42 compute-0 podman[205602]: 2025-10-01 16:48:42.864648956 +0000 UTC m=+0.169576891 container attach 1229b4b807749c9b30f95c628bbaf52cde0711b5dffdd3e53a839fbe2d8ffaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 16:48:42 compute-0 ceph-mon[74273]: pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:43 compute-0 sudo[205772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfitegqobcfeqsjmbldmdyulcftbabfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337322.94837-676-1901100496710/AnsiballZ_file.py'
Oct 01 16:48:43 compute-0 sudo[205772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:43 compute-0 python3.9[205774]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:43 compute-0 sudo[205772]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]: {
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:     "0": [
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:         {
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "devices": [
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "/dev/loop3"
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             ],
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "lv_name": "ceph_lv0",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "lv_size": "21470642176",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "name": "ceph_lv0",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "tags": {
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.cluster_name": "ceph",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.crush_device_class": "",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.encrypted": "0",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.osd_id": "0",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.type": "block",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.vdo": "0"
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             },
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "type": "block",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "vg_name": "ceph_vg0"
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:         }
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:     ],
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:     "1": [
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:         {
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "devices": [
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "/dev/loop4"
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             ],
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "lv_name": "ceph_lv1",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "lv_size": "21470642176",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "name": "ceph_lv1",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "tags": {
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.cluster_name": "ceph",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.crush_device_class": "",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.encrypted": "0",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.osd_id": "1",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.type": "block",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.vdo": "0"
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             },
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "type": "block",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "vg_name": "ceph_vg1"
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:         }
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:     ],
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:     "2": [
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:         {
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "devices": [
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "/dev/loop5"
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             ],
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "lv_name": "ceph_lv2",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "lv_size": "21470642176",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "name": "ceph_lv2",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "tags": {
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.cluster_name": "ceph",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.crush_device_class": "",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.encrypted": "0",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.osd_id": "2",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.type": "block",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:                 "ceph.vdo": "0"
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             },
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "type": "block",
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:             "vg_name": "ceph_vg2"
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:         }
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]:     ]
Oct 01 16:48:43 compute-0 heuristic_poitras[205619]: }
Oct 01 16:48:43 compute-0 systemd[1]: libpod-1229b4b807749c9b30f95c628bbaf52cde0711b5dffdd3e53a839fbe2d8ffaa2.scope: Deactivated successfully.
Oct 01 16:48:43 compute-0 podman[205602]: 2025-10-01 16:48:43.652248046 +0000 UTC m=+0.957175931 container died 1229b4b807749c9b30f95c628bbaf52cde0711b5dffdd3e53a839fbe2d8ffaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_poitras, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:48:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bd228b6d45561397d7a393508255bf7fced35062ed19d17be136db450a48bd3-merged.mount: Deactivated successfully.
Oct 01 16:48:43 compute-0 podman[205602]: 2025-10-01 16:48:43.706827758 +0000 UTC m=+1.011755643 container remove 1229b4b807749c9b30f95c628bbaf52cde0711b5dffdd3e53a839fbe2d8ffaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_poitras, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 16:48:43 compute-0 systemd[1]: libpod-conmon-1229b4b807749c9b30f95c628bbaf52cde0711b5dffdd3e53a839fbe2d8ffaa2.scope: Deactivated successfully.
Oct 01 16:48:43 compute-0 sudo[205371]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:43 compute-0 sudo[205842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:48:43 compute-0 sudo[205842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:48:43 compute-0 sudo[205842]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:43 compute-0 sudo[205892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:48:43 compute-0 sudo[205892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:48:43 compute-0 sudo[205892]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:43 compute-0 sudo[205939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:48:43 compute-0 sudo[205939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:48:43 compute-0 sudo[205939]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:43 compute-0 sudo[205987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:48:43 compute-0 sudo[205987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:48:44 compute-0 sudo[206039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vatvmjcnlqzvaepzhzcnlbujqkfhufzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337323.7044442-676-258811496734979/AnsiballZ_file.py'
Oct 01 16:48:44 compute-0 sudo[206039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:44 compute-0 python3.9[206041]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:44 compute-0 sudo[206039]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:44 compute-0 podman[206107]: 2025-10-01 16:48:44.354335671 +0000 UTC m=+0.050609090 container create 500f3652b7945228eb14959e8bae1804c9ddd009ba12d2c9ecc9414d8dbd89a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:48:44 compute-0 systemd[1]: Started libpod-conmon-500f3652b7945228eb14959e8bae1804c9ddd009ba12d2c9ecc9414d8dbd89a2.scope.
Oct 01 16:48:44 compute-0 podman[206107]: 2025-10-01 16:48:44.33497876 +0000 UTC m=+0.031252219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:48:44 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:48:44 compute-0 podman[206107]: 2025-10-01 16:48:44.448627675 +0000 UTC m=+0.144901134 container init 500f3652b7945228eb14959e8bae1804c9ddd009ba12d2c9ecc9414d8dbd89a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 01 16:48:44 compute-0 podman[206107]: 2025-10-01 16:48:44.456603169 +0000 UTC m=+0.152876598 container start 500f3652b7945228eb14959e8bae1804c9ddd009ba12d2c9ecc9414d8dbd89a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gauss, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:48:44 compute-0 podman[206107]: 2025-10-01 16:48:44.460734868 +0000 UTC m=+0.157008307 container attach 500f3652b7945228eb14959e8bae1804c9ddd009ba12d2c9ecc9414d8dbd89a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:48:44 compute-0 serene_gauss[206153]: 167 167
Oct 01 16:48:44 compute-0 systemd[1]: libpod-500f3652b7945228eb14959e8bae1804c9ddd009ba12d2c9ecc9414d8dbd89a2.scope: Deactivated successfully.
Oct 01 16:48:44 compute-0 podman[206107]: 2025-10-01 16:48:44.463747919 +0000 UTC m=+0.160021348 container died 500f3652b7945228eb14959e8bae1804c9ddd009ba12d2c9ecc9414d8dbd89a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gauss, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 16:48:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e811fea0fa8121bfdff5019494c76ebb9a6dd940fc318d87bad0cf9f8bc582fa-merged.mount: Deactivated successfully.
Oct 01 16:48:44 compute-0 podman[206107]: 2025-10-01 16:48:44.499778513 +0000 UTC m=+0.196051932 container remove 500f3652b7945228eb14959e8bae1804c9ddd009ba12d2c9ecc9414d8dbd89a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gauss, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:48:44 compute-0 systemd[1]: libpod-conmon-500f3652b7945228eb14959e8bae1804c9ddd009ba12d2c9ecc9414d8dbd89a2.scope: Deactivated successfully.
Oct 01 16:48:44 compute-0 sudo[206279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcfxjspgeaxjbktsdvkpgydcsyxzjsde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337324.352577-676-88578508372911/AnsiballZ_file.py'
Oct 01 16:48:44 compute-0 sudo[206279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:44 compute-0 podman[206254]: 2025-10-01 16:48:44.734202904 +0000 UTC m=+0.071150448 container create 26919dc9307c1bcbe6bbc42fe74e38e45c1b4eeed18d40ee2bc252db5479d03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_saha, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 16:48:44 compute-0 systemd[1]: Started libpod-conmon-26919dc9307c1bcbe6bbc42fe74e38e45c1b4eeed18d40ee2bc252db5479d03d.scope.
Oct 01 16:48:44 compute-0 podman[206254]: 2025-10-01 16:48:44.703346448 +0000 UTC m=+0.040293982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:48:44 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:48:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a8e719f9bffb34e6c7a078f9462db7ac2b52d3b20966379a762aa5b695ded9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:48:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a8e719f9bffb34e6c7a078f9462db7ac2b52d3b20966379a762aa5b695ded9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:48:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a8e719f9bffb34e6c7a078f9462db7ac2b52d3b20966379a762aa5b695ded9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:48:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a8e719f9bffb34e6c7a078f9462db7ac2b52d3b20966379a762aa5b695ded9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:48:44 compute-0 podman[206254]: 2025-10-01 16:48:44.84114957 +0000 UTC m=+0.178097114 container init 26919dc9307c1bcbe6bbc42fe74e38e45c1b4eeed18d40ee2bc252db5479d03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_saha, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:48:44 compute-0 podman[206254]: 2025-10-01 16:48:44.849439648 +0000 UTC m=+0.186387182 container start 26919dc9307c1bcbe6bbc42fe74e38e45c1b4eeed18d40ee2bc252db5479d03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 16:48:44 compute-0 podman[206254]: 2025-10-01 16:48:44.85392135 +0000 UTC m=+0.190868864 container attach 26919dc9307c1bcbe6bbc42fe74e38e45c1b4eeed18d40ee2bc252db5479d03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:48:44 compute-0 ceph-mon[74273]: pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:44 compute-0 python3.9[206286]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:44 compute-0 sudo[206279]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:48:45 compute-0 sudo[206446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edwxcwkmszzckyvdgdysnpkeubannlji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337325.094914-676-37553993328284/AnsiballZ_file.py'
Oct 01 16:48:45 compute-0 sudo[206446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:45 compute-0 python3.9[206448]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:45 compute-0 sudo[206446]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:45 compute-0 quizzical_saha[206292]: {
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:         "osd_id": 2,
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:         "type": "bluestore"
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:     },
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:         "osd_id": 0,
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:         "type": "bluestore"
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:     },
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:         "osd_id": 1,
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:         "type": "bluestore"
Oct 01 16:48:45 compute-0 quizzical_saha[206292]:     }
Oct 01 16:48:45 compute-0 quizzical_saha[206292]: }
Oct 01 16:48:45 compute-0 systemd[1]: libpod-26919dc9307c1bcbe6bbc42fe74e38e45c1b4eeed18d40ee2bc252db5479d03d.scope: Deactivated successfully.
Oct 01 16:48:45 compute-0 systemd[1]: libpod-26919dc9307c1bcbe6bbc42fe74e38e45c1b4eeed18d40ee2bc252db5479d03d.scope: Consumed 1.066s CPU time.
Oct 01 16:48:45 compute-0 podman[206254]: 2025-10-01 16:48:45.916532787 +0000 UTC m=+1.253480281 container died 26919dc9307c1bcbe6bbc42fe74e38e45c1b4eeed18d40ee2bc252db5479d03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_saha, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:48:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2a8e719f9bffb34e6c7a078f9462db7ac2b52d3b20966379a762aa5b695ded9-merged.mount: Deactivated successfully.
Oct 01 16:48:45 compute-0 podman[206254]: 2025-10-01 16:48:45.989417969 +0000 UTC m=+1.326365463 container remove 26919dc9307c1bcbe6bbc42fe74e38e45c1b4eeed18d40ee2bc252db5479d03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_saha, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:48:45 compute-0 systemd[1]: libpod-conmon-26919dc9307c1bcbe6bbc42fe74e38e45c1b4eeed18d40ee2bc252db5479d03d.scope: Deactivated successfully.
Oct 01 16:48:46 compute-0 sudo[205987]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:48:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:48:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:48:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:48:46 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 665acf25-42fc-4905-8c67-39e5abc316e7 does not exist
Oct 01 16:48:46 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 4bdf4b04-beb8-4c05-98b5-b3156382a533 does not exist
Oct 01 16:48:46 compute-0 sudo[206587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:48:46 compute-0 sudo[206587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:48:46 compute-0 sudo[206587]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:46 compute-0 sudo[206636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:48:46 compute-0 sudo[206636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:48:46 compute-0 sudo[206636]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:46 compute-0 sudo[206687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-texsfuefrwqrfhpxlbnkebpylbywvkqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337325.8498201-676-2423302574606/AnsiballZ_file.py'
Oct 01 16:48:46 compute-0 sudo[206687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:46 compute-0 python3.9[206689]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:46 compute-0 sudo[206687]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:46 compute-0 podman[206737]: 2025-10-01 16:48:46.828253396 +0000 UTC m=+0.121780357 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 16:48:46 compute-0 ceph-mon[74273]: pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:48:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:48:46 compute-0 sudo[206865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqtfxpkwnolrtmhdyxumhsfqiuqvqvvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337326.6296215-676-150085780572088/AnsiballZ_file.py'
Oct 01 16:48:46 compute-0 sudo[206865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:47 compute-0 python3.9[206867]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:47 compute-0 sudo[206865]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:47 compute-0 sudo[207017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrocjxryktpouyzdsbjtaxagmtduencl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337327.2623863-676-55411231255258/AnsiballZ_file.py'
Oct 01 16:48:47 compute-0 sudo[207017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:47 compute-0 python3.9[207019]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:47 compute-0 sudo[207017]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:48 compute-0 sudo[207169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrdynidpklyiokfpshcjhvkiskqbmeoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337327.8682544-676-131155903523565/AnsiballZ_file.py'
Oct 01 16:48:48 compute-0 sudo[207169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:48 compute-0 python3.9[207171]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:48 compute-0 sudo[207169]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:48 compute-0 sudo[207321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cspbabgtkzauteqapsrjjwqifdyvgxcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337328.5043638-676-227512808612938/AnsiballZ_file.py'
Oct 01 16:48:48 compute-0 sudo[207321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:48 compute-0 ceph-mon[74273]: pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:49 compute-0 python3.9[207323]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:49 compute-0 sudo[207321]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:49 compute-0 sudo[207473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzwpveuhdxkopelpgilqwwjqajyaovxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337329.1942835-676-94251978111146/AnsiballZ_file.py'
Oct 01 16:48:49 compute-0 sudo[207473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:49 compute-0 python3.9[207475]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:49 compute-0 sudo[207473]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:48:50 compute-0 sudo[207625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvssksyqgqcgnecnqzxgupxjaexahrye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337329.807654-676-193097839158820/AnsiballZ_file.py'
Oct 01 16:48:50 compute-0 sudo[207625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:50 compute-0 python3.9[207627]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:50 compute-0 sudo[207625]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:50 compute-0 sudo[207777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpfslbwetwjworftxiosvscucxhwntht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337330.5264723-676-37915610455319/AnsiballZ_file.py'
Oct 01 16:48:50 compute-0 sudo[207777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:50 compute-0 ceph-mon[74273]: pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:51 compute-0 python3.9[207779]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:51 compute-0 sudo[207777]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:51 compute-0 sudo[207939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykhvwwrlxxfkakyeeyqwecbjiozrtecn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337331.2473435-676-161648183252950/AnsiballZ_file.py'
Oct 01 16:48:51 compute-0 sudo[207939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:51 compute-0 podman[207903]: 2025-10-01 16:48:51.65992661 +0000 UTC m=+0.069445151 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 01 16:48:51 compute-0 python3.9[207950]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:51 compute-0 sudo[207939]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:52 compute-0 sudo[208102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kljssqwvfnolqmcqfwbofkwvpkoxcsed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337332.0630956-775-81193306822135/AnsiballZ_stat.py'
Oct 01 16:48:52 compute-0 sudo[208102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:52 compute-0 python3.9[208104]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:48:52 compute-0 sudo[208102]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:52 compute-0 ceph-mon[74273]: pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:53 compute-0 sudo[208225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkttbaownemhjciobsxiesuighigpzsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337332.0630956-775-81193306822135/AnsiballZ_copy.py'
Oct 01 16:48:53 compute-0 sudo[208225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:53 compute-0 python3.9[208227]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759337332.0630956-775-81193306822135/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:53 compute-0 sudo[208225]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:53 compute-0 sudo[208377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjmupfmqycjmetnbtzwztjzqiljuybap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337333.361369-775-52321961277556/AnsiballZ_stat.py'
Oct 01 16:48:53 compute-0 sudo[208377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:53 compute-0 python3.9[208379]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:48:53 compute-0 sudo[208377]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:54 compute-0 sudo[208500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etfjmwmcfismdroisghfbscwdofxfpwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337333.361369-775-52321961277556/AnsiballZ_copy.py'
Oct 01 16:48:54 compute-0 sudo[208500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:54 compute-0 python3.9[208502]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759337333.361369-775-52321961277556/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:54 compute-0 sudo[208500]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:54 compute-0 ceph-mon[74273]: pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:48:55 compute-0 sudo[208652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejmmpxgittnrnecdjphpxxxssimqyjmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337334.6947618-775-229187545303520/AnsiballZ_stat.py'
Oct 01 16:48:55 compute-0 sudo[208652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:55 compute-0 python3.9[208654]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:48:55 compute-0 sudo[208652]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:55 compute-0 sudo[208775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hieqcitqwcxxxghxdzlehsytbltccjpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337334.6947618-775-229187545303520/AnsiballZ_copy.py'
Oct 01 16:48:55 compute-0 sudo[208775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:55 compute-0 python3.9[208777]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759337334.6947618-775-229187545303520/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:55 compute-0 sudo[208775]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:56 compute-0 sudo[208927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnzjdjywrjsogslmuaegeavwhuuhjzwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337336.0815346-775-259506779416239/AnsiballZ_stat.py'
Oct 01 16:48:56 compute-0 sudo[208927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:56 compute-0 python3.9[208929]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:48:56 compute-0 sudo[208927]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:56 compute-0 ceph-mon[74273]: pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:57 compute-0 sudo[209050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muzbxpnwmspsxswgwcirptxsfogpxjhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337336.0815346-775-259506779416239/AnsiballZ_copy.py'
Oct 01 16:48:57 compute-0 sudo[209050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:57 compute-0 python3.9[209052]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759337336.0815346-775-259506779416239/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:57 compute-0 sudo[209050]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:57 compute-0 sudo[209202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxqkvulfklxsdexgxjbgkvkglaeprlwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337337.4674184-775-33524226062770/AnsiballZ_stat.py'
Oct 01 16:48:57 compute-0 sudo[209202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:58 compute-0 python3.9[209204]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:48:58 compute-0 sudo[209202]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:58 compute-0 sudo[209325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkopyrbdbwlkmlrauhncxkbxekqnpyjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337337.4674184-775-33524226062770/AnsiballZ_copy.py'
Oct 01 16:48:58 compute-0 sudo[209325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:58 compute-0 python3.9[209327]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759337337.4674184-775-33524226062770/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:58 compute-0 sudo[209325]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:58 compute-0 ceph-mon[74273]: pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:59 compute-0 sudo[209477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkinyauqnvpuwngmatgyjnufadzxuzek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337338.743366-775-90166333317583/AnsiballZ_stat.py'
Oct 01 16:48:59 compute-0 sudo[209477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:59 compute-0 python3.9[209479]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:48:59 compute-0 sudo[209477]: pam_unix(sudo:session): session closed for user root
Oct 01 16:48:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:48:59 compute-0 sudo[209600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlqedvlaqzxyzpwrpzsyrldpmgurqyqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337338.743366-775-90166333317583/AnsiballZ_copy.py'
Oct 01 16:48:59 compute-0 sudo[209600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:48:59 compute-0 python3.9[209602]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759337338.743366-775-90166333317583/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:48:59 compute-0 sudo[209600]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:49:00 compute-0 sudo[209752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkkqohwhoxokzlphvhetrxrmmgcjkfwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337340.0630424-775-47460689793586/AnsiballZ_stat.py'
Oct 01 16:49:00 compute-0 sudo[209752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:00 compute-0 python3.9[209754]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:49:00 compute-0 sudo[209752]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:00 compute-0 ceph-mon[74273]: pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:01 compute-0 sudo[209875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kimodpxutiarminlomeekjlxugddfefm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337340.0630424-775-47460689793586/AnsiballZ_copy.py'
Oct 01 16:49:01 compute-0 sudo[209875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:01 compute-0 python3.9[209877]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759337340.0630424-775-47460689793586/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:01 compute-0 sudo[209875]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:01 compute-0 sudo[210027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgasdmqkymdotwiwuuhvfhfkixtkjurd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337341.3891358-775-253494451532260/AnsiballZ_stat.py'
Oct 01 16:49:01 compute-0 sudo[210027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:01 compute-0 anacron[4037]: Job `cron.weekly' started
Oct 01 16:49:01 compute-0 anacron[4037]: Job `cron.weekly' terminated
Oct 01 16:49:01 compute-0 python3.9[210029]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:49:01 compute-0 sudo[210027]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:02 compute-0 sudo[210152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iejxodidklgbgsknsgyvlrswfuhdrdcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337341.3891358-775-253494451532260/AnsiballZ_copy.py'
Oct 01 16:49:02 compute-0 sudo[210152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:02 compute-0 python3.9[210154]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759337341.3891358-775-253494451532260/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:02 compute-0 sudo[210152]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:02 compute-0 ceph-mon[74273]: pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:02 compute-0 sudo[210304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkpnjedhipnoxokkfcsksnptuanrmjno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337342.6620743-775-46709453113668/AnsiballZ_stat.py'
Oct 01 16:49:02 compute-0 sudo[210304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:03 compute-0 python3.9[210306]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:49:03 compute-0 sudo[210304]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:03 compute-0 sudo[210427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbpagxrifufhbqxwagrvzzohaokoraub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337342.6620743-775-46709453113668/AnsiballZ_copy.py'
Oct 01 16:49:03 compute-0 sudo[210427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:03 compute-0 python3.9[210429]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759337342.6620743-775-46709453113668/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:03 compute-0 sudo[210427]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:04 compute-0 sudo[210579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbwueucqrmtzxpeyjwptifohiuwuecrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337344.0277333-775-187463060436157/AnsiballZ_stat.py'
Oct 01 16:49:04 compute-0 sudo[210579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:04 compute-0 python3.9[210581]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:49:04 compute-0 sudo[210579]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:04 compute-0 sudo[210702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnmzgkytanqbaokohwowoznyzapmerdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337344.0277333-775-187463060436157/AnsiballZ_copy.py'
Oct 01 16:49:04 compute-0 sudo[210702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:04 compute-0 ceph-mon[74273]: pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:49:05 compute-0 python3.9[210704]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759337344.0277333-775-187463060436157/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:05 compute-0 sudo[210702]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:05 compute-0 sudo[210854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvayawyhpqaukwffmbveduqskjtravcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337345.30063-775-91980163778408/AnsiballZ_stat.py'
Oct 01 16:49:05 compute-0 sudo[210854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:05 compute-0 python3.9[210856]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:49:05 compute-0 sudo[210854]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:06 compute-0 sudo[210977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gskovpiczzuapeyvsncoudfnktjypekz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337345.30063-775-91980163778408/AnsiballZ_copy.py'
Oct 01 16:49:06 compute-0 sudo[210977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:06 compute-0 python3.9[210979]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759337345.30063-775-91980163778408/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:06 compute-0 sudo[210977]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:06 compute-0 ceph-mon[74273]: pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:07 compute-0 sudo[211129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghrdxynxlafqyuvbinmviuwinlbhwabb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337346.716788-775-222947050320312/AnsiballZ_stat.py'
Oct 01 16:49:07 compute-0 sudo[211129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:07 compute-0 python3.9[211131]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:49:07 compute-0 sudo[211129]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:07 compute-0 sudo[211252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fooqajodpjiaxlpazueswdueqaareklm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337346.716788-775-222947050320312/AnsiballZ_copy.py'
Oct 01 16:49:07 compute-0 sudo[211252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:07 compute-0 python3.9[211254]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759337346.716788-775-222947050320312/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:07 compute-0 sudo[211252]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:08 compute-0 sudo[211404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wazyhzqyalktvpkncegxedhonykvlmfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337347.9663699-775-59240332401902/AnsiballZ_stat.py'
Oct 01 16:49:08 compute-0 sudo[211404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:08 compute-0 python3.9[211406]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:49:08 compute-0 sudo[211404]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:08 compute-0 sudo[211527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zovozcuhxkojftlzgnzwxdksscgcyqiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337347.9663699-775-59240332401902/AnsiballZ_copy.py'
Oct 01 16:49:08 compute-0 sudo[211527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:09 compute-0 ceph-mon[74273]: pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:09 compute-0 python3.9[211529]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759337347.9663699-775-59240332401902/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:09 compute-0 sudo[211527]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:09 compute-0 sudo[211679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdknrclaqaagvhclihngqfushvparpmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337349.3110728-775-222165181377776/AnsiballZ_stat.py'
Oct 01 16:49:09 compute-0 sudo[211679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:09 compute-0 python3.9[211681]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:49:09 compute-0 sudo[211679]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:49:10 compute-0 sudo[211802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeijszwjhxihpixeqoukfyyxsfdahwty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337349.3110728-775-222165181377776/AnsiballZ_copy.py'
Oct 01 16:49:10 compute-0 sudo[211802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:10 compute-0 python3.9[211804]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759337349.3110728-775-222165181377776/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:10 compute-0 sudo[211802]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:11 compute-0 ceph-mon[74273]: pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:49:11
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'vms', 'images', 'default.rgw.meta', '.mgr']
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:49:11 compute-0 python3.9[211954]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:49:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:12 compute-0 sudo[212107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afubxoluunmcfrwjvmowtmdysmrjailf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337351.6252441-981-43401932766721/AnsiballZ_seboolean.py'
Oct 01 16:49:12 compute-0 sudo[212107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:12 compute-0 ceph-mgr[74571]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3235544197
Oct 01 16:49:12 compute-0 python3.9[212109]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 01 16:49:13 compute-0 ceph-mon[74273]: pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:13 compute-0 sudo[212107]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:14 compute-0 sudo[212263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-benhscdvjosglkmqshdcpgzgwgganbqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337353.6661875-989-271649399145204/AnsiballZ_copy.py'
Oct 01 16:49:14 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct 01 16:49:14 compute-0 sudo[212263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:14 compute-0 ceph-mon[74273]: pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:14 compute-0 python3.9[212265]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:14 compute-0 sudo[212263]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:14 compute-0 sudo[212415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmnhwqcqpcruknwsqjcknxgkxdqgxjni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337354.4136376-989-152777625314623/AnsiballZ_copy.py'
Oct 01 16:49:14 compute-0 sudo[212415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:14 compute-0 python3.9[212417]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:15 compute-0 sudo[212415]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:49:15 compute-0 sudo[212567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsyclyumtlquqtawfmqklaucdkvskowt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337355.1519134-989-125727611439863/AnsiballZ_copy.py'
Oct 01 16:49:15 compute-0 sudo[212567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:15 compute-0 python3.9[212569]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:15 compute-0 sudo[212567]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:16 compute-0 sudo[212719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-didixztpudaxvafxthuigsftsglraeof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337355.8330264-989-200713296604604/AnsiballZ_copy.py'
Oct 01 16:49:16 compute-0 sudo[212719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:16 compute-0 python3.9[212721]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:16 compute-0 sudo[212719]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:16 compute-0 ceph-mon[74273]: pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:17 compute-0 sudo[212880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjhgqtpempoharkpvmkkzcsgerodyahq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337356.5955594-989-44473828878600/AnsiballZ_copy.py'
Oct 01 16:49:17 compute-0 sudo[212880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:17 compute-0 podman[212845]: 2025-10-01 16:49:17.092531809 +0000 UTC m=+0.134438829 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 01 16:49:17 compute-0 python3.9[212890]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:17 compute-0 sudo[212880]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:17 compute-0 sudo[213049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkkdtvftxlfstzyfavvljnfvrujmzbok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337357.4906993-1025-158417283016564/AnsiballZ_copy.py'
Oct 01 16:49:17 compute-0 sudo[213049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:18 compute-0 python3.9[213051]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:18 compute-0 sudo[213049]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:18 compute-0 sudo[213201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydwfggbbogkamgfmieiiivmkctztvmqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337358.2179806-1025-38074080457831/AnsiballZ_copy.py'
Oct 01 16:49:18 compute-0 sudo[213201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:18 compute-0 python3.9[213203]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:18 compute-0 ceph-mon[74273]: pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:18 compute-0 sudo[213201]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:19 compute-0 sudo[213353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzwbbnannzjsfonsohgyufialkotbydd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337358.8889248-1025-112360784155964/AnsiballZ_copy.py'
Oct 01 16:49:19 compute-0 sudo[213353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:19 compute-0 python3.9[213355]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:19 compute-0 sudo[213353]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:19 compute-0 sudo[213505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahsufraseyovplskgvuthfcgimiugidx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337359.586308-1025-57376893446752/AnsiballZ_copy.py'
Oct 01 16:49:19 compute-0 sudo[213505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:49:19.948 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:49:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:49:19.949 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:49:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:49:19.949 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:49:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:49:20 compute-0 python3.9[213507]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:20 compute-0 sudo[213505]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:20 compute-0 sudo[213657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wufwomzxxtssjpapsehalgfyllnmlcnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337360.3023539-1025-144306835210241/AnsiballZ_copy.py'
Oct 01 16:49:20 compute-0 sudo[213657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:20 compute-0 ceph-mon[74273]: pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:20 compute-0 python3.9[213659]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:20 compute-0 sudo[213657]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:20 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:49:21 compute-0 sudo[213809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmndrehrlpxffjpumfulmvwacdufyciu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337361.0589938-1061-181757929471053/AnsiballZ_systemd.py'
Oct 01 16:49:21 compute-0 sudo[213809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:21 compute-0 python3.9[213811]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 16:49:21 compute-0 systemd[1]: Reloading.
Oct 01 16:49:21 compute-0 podman[213813]: 2025-10-01 16:49:21.834548492 +0000 UTC m=+0.045156833 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 16:49:21 compute-0 systemd-rc-local-generator[213854]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:49:21 compute-0 systemd-sysv-generator[213857]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:49:22 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Oct 01 16:49:22 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Oct 01 16:49:22 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 01 16:49:22 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 01 16:49:22 compute-0 systemd[1]: Starting libvirt logging daemon...
Oct 01 16:49:22 compute-0 systemd[1]: Started libvirt logging daemon.
Oct 01 16:49:22 compute-0 sudo[213809]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:22 compute-0 sudo[214020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alaaegwtvzxvwjaeuwfpieudiwztjvsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337362.424765-1061-150912973641963/AnsiballZ_systemd.py'
Oct 01 16:49:22 compute-0 sudo[214020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:22 compute-0 ceph-mon[74273]: pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:23 compute-0 python3.9[214022]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 16:49:23 compute-0 systemd[1]: Reloading.
Oct 01 16:49:23 compute-0 systemd-rc-local-generator[214050]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:49:23 compute-0 systemd-sysv-generator[214054]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:49:23 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 01 16:49:23 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 01 16:49:23 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 01 16:49:23 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 01 16:49:23 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 01 16:49:23 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 01 16:49:23 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 01 16:49:23 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 01 16:49:23 compute-0 sudo[214020]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:24 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 01 16:49:24 compute-0 sudo[214235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfxgifgkmamqjcwnegevndgckroaleii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337363.7581158-1061-102286397223567/AnsiballZ_systemd.py'
Oct 01 16:49:24 compute-0 sudo[214235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:24 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 01 16:49:24 compute-0 python3.9[214237]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 16:49:24 compute-0 systemd[1]: Reloading.
Oct 01 16:49:24 compute-0 systemd-rc-local-generator[214263]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:49:24 compute-0 systemd-sysv-generator[214267]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:49:24 compute-0 ceph-mon[74273]: pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:24 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 01 16:49:24 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 01 16:49:24 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 01 16:49:24 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 01 16:49:24 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 01 16:49:24 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 01 16:49:24 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 01 16:49:24 compute-0 sudo[214235]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:24 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 01 16:49:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:49:25 compute-0 sudo[214454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhfmgykvnxuiodpoqijsrkezittbcvbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337365.0464385-1061-144231946022564/AnsiballZ_systemd.py'
Oct 01 16:49:25 compute-0 sudo[214454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:25 compute-0 python3.9[214456]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 16:49:25 compute-0 systemd[1]: Reloading.
Oct 01 16:49:25 compute-0 systemd-sysv-generator[214488]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:49:25 compute-0 systemd-rc-local-generator[214485]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:49:25 compute-0 setroubleshoot[214208]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 87e2bcf5-b049-4837-9313-000ec0e24f73
Oct 01 16:49:25 compute-0 setroubleshoot[214208]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Oct 01 16:49:26 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Oct 01 16:49:26 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 01 16:49:26 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 01 16:49:26 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 01 16:49:26 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 01 16:49:26 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 01 16:49:26 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 01 16:49:26 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 01 16:49:26 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 01 16:49:26 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 01 16:49:26 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 01 16:49:26 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 01 16:49:26 compute-0 sudo[214454]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:26 compute-0 sudo[214669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klfdnmomzdqyupexqhonkebxtyatwrsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337366.2952862-1061-41086289453692/AnsiballZ_systemd.py'
Oct 01 16:49:26 compute-0 sudo[214669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:26 compute-0 ceph-mon[74273]: pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:26 compute-0 python3.9[214671]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 16:49:26 compute-0 systemd[1]: Reloading.
Oct 01 16:49:27 compute-0 systemd-rc-local-generator[214697]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:49:27 compute-0 systemd-sysv-generator[214701]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:49:27 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Oct 01 16:49:27 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Oct 01 16:49:27 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 01 16:49:27 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 01 16:49:27 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 01 16:49:27 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 01 16:49:27 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 01 16:49:27 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 01 16:49:27 compute-0 sudo[214669]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:28 compute-0 sudo[214878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdlmskrrlfjyudrbtrocdbsivuierqtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337367.6762006-1098-114789294697177/AnsiballZ_file.py'
Oct 01 16:49:28 compute-0 sudo[214878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:28 compute-0 python3.9[214880]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:28 compute-0 sudo[214878]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:28 compute-0 sudo[215030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hootozkbjypqxmzagxhagdebmvvrggxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337368.493358-1106-152377552852119/AnsiballZ_find.py'
Oct 01 16:49:28 compute-0 sudo[215030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:28 compute-0 ceph-mon[74273]: pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:29 compute-0 python3.9[215032]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 01 16:49:29 compute-0 sudo[215030]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:29 compute-0 sudo[215182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxsoqamntzguirjbgtndsoblixadfbmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337369.41475-1114-69750496250994/AnsiballZ_command.py'
Oct 01 16:49:29 compute-0 sudo[215182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:49:30 compute-0 python3.9[215184]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:49:30 compute-0 sudo[215182]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:30 compute-0 ceph-mon[74273]: pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:31 compute-0 python3.9[215338]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 01 16:49:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:32 compute-0 python3.9[215488]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:49:32 compute-0 python3.9[215609]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759337371.5035732-1133-217210805308142/.source.xml follow=False _original_basename=secret.xml.j2 checksum=6a1fcd1fcd69370464320f1db3ef0d540cbf5194 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:32 compute-0 ceph-mon[74273]: pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:33 compute-0 sudo[215759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juyeofwrrrzzemlagwjxgyzmrttrztxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337372.8423812-1148-126884241516383/AnsiballZ_command.py'
Oct 01 16:49:33 compute-0 sudo[215759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:33 compute-0 python3.9[215761]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine f44264e3-e26a-5bd3-9e84-b4ba651d9cf5
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:49:33 compute-0 polkitd[6497]: Registered Authentication Agent for unix-process:215763:308823 (system bus name :1.2985 [/usr/bin/pkttyagent --process 215763 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 01 16:49:33 compute-0 polkitd[6497]: Unregistered Authentication Agent for unix-process:215763:308823 (system bus name :1.2985, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 01 16:49:33 compute-0 polkitd[6497]: Registered Authentication Agent for unix-process:215762:308823 (system bus name :1.2986 [/usr/bin/pkttyagent --process 215762 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 01 16:49:33 compute-0 polkitd[6497]: Unregistered Authentication Agent for unix-process:215762:308823 (system bus name :1.2986, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 01 16:49:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:33 compute-0 sudo[215759]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:34 compute-0 python3.9[215923]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:34 compute-0 ceph-mon[74273]: pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:49:35 compute-0 sudo[216073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvocwzexogiyydynrtlodiaifupkjmxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337374.6500204-1164-23580585673396/AnsiballZ_command.py'
Oct 01 16:49:35 compute-0 sudo[216073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:35 compute-0 sudo[216073]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:35 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 01 16:49:35 compute-0 sudo[216226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzwadkbqyeebnpxcidmamydezuzlgiie ; FSID=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 KEY=AQC1V91oAAAAABAAOuYA0InTprUH/o2bXP7eNg== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337375.517757-1172-117334574245642/AnsiballZ_command.py'
Oct 01 16:49:35 compute-0 sudo[216226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:35 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 01 16:49:36 compute-0 polkitd[6497]: Registered Authentication Agent for unix-process:216229:309085 (system bus name :1.2989 [/usr/bin/pkttyagent --process 216229 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 01 16:49:36 compute-0 polkitd[6497]: Unregistered Authentication Agent for unix-process:216229:309085 (system bus name :1.2989, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 01 16:49:36 compute-0 sudo[216226]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:36 compute-0 sudo[216384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqmhbkgjpdczuqewnunhfrrzfehdgzit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337376.4303467-1180-147439316845233/AnsiballZ_copy.py'
Oct 01 16:49:36 compute-0 sudo[216384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:36 compute-0 ceph-mon[74273]: pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:36 compute-0 python3.9[216386]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:37 compute-0 sudo[216384]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:37 compute-0 sudo[216536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlpcsgamtdaiujxaqkhstxuzqesfwnkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337377.2625506-1188-85521559953798/AnsiballZ_stat.py'
Oct 01 16:49:37 compute-0 sudo[216536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:37 compute-0 python3.9[216538]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:49:37 compute-0 sudo[216536]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:38 compute-0 sudo[216659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcavpbulcxzwlffxjnsgitlzvcrgjwrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337377.2625506-1188-85521559953798/AnsiballZ_copy.py'
Oct 01 16:49:38 compute-0 sudo[216659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:38 compute-0 python3.9[216661]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759337377.2625506-1188-85521559953798/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:38 compute-0 sudo[216659]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:38 compute-0 ceph-mon[74273]: pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:39 compute-0 sudo[216811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixwiqntcoksfqmuwjojjptbdoylmizpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337378.836356-1204-112353771587545/AnsiballZ_file.py'
Oct 01 16:49:39 compute-0 sudo[216811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:39 compute-0 python3.9[216813]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:39 compute-0 sudo[216811]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:39 compute-0 sudo[216963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shkulfkvojwivbzmqpvpjqgbyybqlzhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337379.5974915-1212-102226887102008/AnsiballZ_stat.py'
Oct 01 16:49:39 compute-0 sudo[216963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:49:40 compute-0 python3.9[216965]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:49:40 compute-0 sudo[216963]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:40 compute-0 sudo[217041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oekdhhkuqdubyztubaoslqjwxazzbuiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337379.5974915-1212-102226887102008/AnsiballZ_file.py'
Oct 01 16:49:40 compute-0 sudo[217041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:40 compute-0 python3.9[217043]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:40 compute-0 sudo[217041]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:41 compute-0 ceph-mon[74273]: pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:41 compute-0 sudo[217193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ummvzoughmnyrvpwmcngevpeissfmmql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337380.852266-1224-229658464029181/AnsiballZ_stat.py'
Oct 01 16:49:41 compute-0 sudo[217193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:49:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:49:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:49:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:49:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:49:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:49:41 compute-0 python3.9[217195]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:49:41 compute-0 sudo[217193]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:41 compute-0 sudo[217271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljbxhjpoupgtfwulztsbjlkkoxnvtanb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337380.852266-1224-229658464029181/AnsiballZ_file.py'
Oct 01 16:49:41 compute-0 sudo[217271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:41 compute-0 python3.9[217273]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2kxc_xyx recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:41 compute-0 sudo[217271]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:42 compute-0 sudo[217423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysmfrzzgcgrsqwfxftijoezypxwmbrzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337382.1545315-1236-192433122869231/AnsiballZ_stat.py'
Oct 01 16:49:42 compute-0 sudo[217423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:42 compute-0 python3.9[217425]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:49:42 compute-0 sudo[217423]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:43 compute-0 ceph-mon[74273]: pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:43 compute-0 sudo[217501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncisdkbwkstevucayjbqpbxoklmunhpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337382.1545315-1236-192433122869231/AnsiballZ_file.py'
Oct 01 16:49:43 compute-0 sudo[217501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:43 compute-0 python3.9[217503]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:43 compute-0 sudo[217501]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:43 compute-0 sudo[217653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnpxewthdozkvavgaklhrrfsrzdwojvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337383.4575932-1249-86459736165807/AnsiballZ_command.py'
Oct 01 16:49:43 compute-0 sudo[217653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:44 compute-0 python3.9[217655]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:49:44 compute-0 sudo[217653]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:44 compute-0 ceph-mon[74273]: pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:44 compute-0 sudo[217806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zspwkflzipjwxxioixszdfawecvbpvdd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759337384.359856-1257-251045912947130/AnsiballZ_edpm_nftables_from_files.py'
Oct 01 16:49:44 compute-0 sudo[217806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:49:45 compute-0 python3[217808]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 01 16:49:45 compute-0 sudo[217806]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:45 compute-0 sudo[217958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aikxubukqtamslwldarvzgspcguiekto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337385.374802-1265-263425394956773/AnsiballZ_stat.py'
Oct 01 16:49:45 compute-0 sudo[217958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:45 compute-0 python3.9[217960]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:49:45 compute-0 sudo[217958]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:46 compute-0 sudo[218054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgqmbuihbsgudwrwqazwyeqvfqrwnskg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337385.374802-1265-263425394956773/AnsiballZ_file.py'
Oct 01 16:49:46 compute-0 sudo[218054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:46 compute-0 sudo[218019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:49:46 compute-0 sudo[218019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:49:46 compute-0 sudo[218019]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:46 compute-0 sudo[218064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:49:46 compute-0 sudo[218064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:49:46 compute-0 sudo[218064]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:46 compute-0 sudo[218089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:49:46 compute-0 sudo[218089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:49:46 compute-0 sudo[218089]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:46 compute-0 python3.9[218061]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:46 compute-0 sudo[218054]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:46 compute-0 sudo[218114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:49:46 compute-0 sudo[218114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:49:46 compute-0 ceph-mon[74273]: pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:47 compute-0 sudo[218114]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:47 compute-0 sudo[218319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvhpkdzbopgbsehfdwqscdqosgkhieki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337386.6919744-1277-140761146898816/AnsiballZ_stat.py'
Oct 01 16:49:47 compute-0 sudo[218319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:49:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:49:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:49:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:49:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:49:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:49:47 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev f33ef7e7-66af-421b-8d1c-af084073081c does not exist
Oct 01 16:49:47 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev f1735b0b-241f-41c6-976e-7af6b010a3d6 does not exist
Oct 01 16:49:47 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 5477c861-292a-4269-b1d0-6fac96a1ab5e does not exist
Oct 01 16:49:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:49:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:49:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:49:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:49:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:49:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:49:47 compute-0 sudo[218322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:49:47 compute-0 sudo[218322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:49:47 compute-0 sudo[218322]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:47 compute-0 sudo[218348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:49:47 compute-0 sudo[218348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:49:47 compute-0 sudo[218348]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:47 compute-0 python3.9[218321]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:49:47 compute-0 podman[218346]: 2025-10-01 16:49:47.244707582 +0000 UTC m=+0.086476989 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 01 16:49:47 compute-0 sudo[218319]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:47 compute-0 sudo[218393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:49:47 compute-0 sudo[218393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:49:47 compute-0 sudo[218393]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:47 compute-0 sudo[218425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:49:47 compute-0 sudo[218425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:49:47 compute-0 sudo[218535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxkvifvqpgbtnfhznyrvcszjkyjdrbpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337386.6919744-1277-140761146898816/AnsiballZ_file.py'
Oct 01 16:49:47 compute-0 sudo[218535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:49:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:49:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:49:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:49:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:49:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:49:47 compute-0 podman[218566]: 2025-10-01 16:49:47.726104533 +0000 UTC m=+0.057974725 container create d50c6f57eb3cf3074bcc9e0e852f34e062f50189b6aaa3c232302b72596c2413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jang, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:49:47 compute-0 python3.9[218544]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:47 compute-0 sudo[218535]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:47 compute-0 systemd[1]: Started libpod-conmon-d50c6f57eb3cf3074bcc9e0e852f34e062f50189b6aaa3c232302b72596c2413.scope.
Oct 01 16:49:47 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:49:47 compute-0 podman[218566]: 2025-10-01 16:49:47.706776035 +0000 UTC m=+0.038646267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:49:47 compute-0 podman[218566]: 2025-10-01 16:49:47.814256331 +0000 UTC m=+0.146126533 container init d50c6f57eb3cf3074bcc9e0e852f34e062f50189b6aaa3c232302b72596c2413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:49:47 compute-0 podman[218566]: 2025-10-01 16:49:47.822215 +0000 UTC m=+0.154085192 container start d50c6f57eb3cf3074bcc9e0e852f34e062f50189b6aaa3c232302b72596c2413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jang, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:49:47 compute-0 podman[218566]: 2025-10-01 16:49:47.826462506 +0000 UTC m=+0.158332718 container attach d50c6f57eb3cf3074bcc9e0e852f34e062f50189b6aaa3c232302b72596c2413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jang, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:49:47 compute-0 elegant_jang[218583]: 167 167
Oct 01 16:49:47 compute-0 systemd[1]: libpod-d50c6f57eb3cf3074bcc9e0e852f34e062f50189b6aaa3c232302b72596c2413.scope: Deactivated successfully.
Oct 01 16:49:47 compute-0 podman[218566]: 2025-10-01 16:49:47.829364244 +0000 UTC m=+0.161234436 container died d50c6f57eb3cf3074bcc9e0e852f34e062f50189b6aaa3c232302b72596c2413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 01 16:49:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-65f153c32788793e3925f307cc2ea414eca91428693ca342a8fad977bc2d346d-merged.mount: Deactivated successfully.
Oct 01 16:49:47 compute-0 podman[218566]: 2025-10-01 16:49:47.882576019 +0000 UTC m=+0.214446211 container remove d50c6f57eb3cf3074bcc9e0e852f34e062f50189b6aaa3c232302b72596c2413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:49:47 compute-0 systemd[1]: libpod-conmon-d50c6f57eb3cf3074bcc9e0e852f34e062f50189b6aaa3c232302b72596c2413.scope: Deactivated successfully.
Oct 01 16:49:48 compute-0 podman[218657]: 2025-10-01 16:49:48.048356931 +0000 UTC m=+0.054827046 container create 6a7ce97b09be4d47a628a719ce3dda1ee199ec594edd23de6f6dc319f43db935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:49:48 compute-0 systemd[1]: Started libpod-conmon-6a7ce97b09be4d47a628a719ce3dda1ee199ec594edd23de6f6dc319f43db935.scope.
Oct 01 16:49:48 compute-0 podman[218657]: 2025-10-01 16:49:48.018828181 +0000 UTC m=+0.025298306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:49:48 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8162aa976e855b14c92c7c54a86b5f34931835200e68e83f1ae4abb1eea34014/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8162aa976e855b14c92c7c54a86b5f34931835200e68e83f1ae4abb1eea34014/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8162aa976e855b14c92c7c54a86b5f34931835200e68e83f1ae4abb1eea34014/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8162aa976e855b14c92c7c54a86b5f34931835200e68e83f1ae4abb1eea34014/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8162aa976e855b14c92c7c54a86b5f34931835200e68e83f1ae4abb1eea34014/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:49:48 compute-0 podman[218657]: 2025-10-01 16:49:48.179873395 +0000 UTC m=+0.186343530 container init 6a7ce97b09be4d47a628a719ce3dda1ee199ec594edd23de6f6dc319f43db935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 16:49:48 compute-0 podman[218657]: 2025-10-01 16:49:48.194823706 +0000 UTC m=+0.201293831 container start 6a7ce97b09be4d47a628a719ce3dda1ee199ec594edd23de6f6dc319f43db935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lichterman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:49:48 compute-0 podman[218657]: 2025-10-01 16:49:48.198982581 +0000 UTC m=+0.205452676 container attach 6a7ce97b09be4d47a628a719ce3dda1ee199ec594edd23de6f6dc319f43db935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 16:49:48 compute-0 sudo[218779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvnxmvrkuldcshosshswhhvhcteyfdlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337387.948858-1289-151090884858440/AnsiballZ_stat.py'
Oct 01 16:49:48 compute-0 sudo[218779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:48 compute-0 python3.9[218781]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:49:48 compute-0 sudo[218779]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:48 compute-0 ceph-mon[74273]: pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:48 compute-0 sudo[218864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwuhlutrkpdopatffrrckpvlrqswfwdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337387.948858-1289-151090884858440/AnsiballZ_file.py'
Oct 01 16:49:48 compute-0 sudo[218864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:49 compute-0 python3.9[218867]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:49 compute-0 sudo[218864]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:49 compute-0 jolly_lichterman[218702]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:49:49 compute-0 jolly_lichterman[218702]: --> relative data size: 1.0
Oct 01 16:49:49 compute-0 jolly_lichterman[218702]: --> All data devices are unavailable
Oct 01 16:49:49 compute-0 systemd[1]: libpod-6a7ce97b09be4d47a628a719ce3dda1ee199ec594edd23de6f6dc319f43db935.scope: Deactivated successfully.
Oct 01 16:49:49 compute-0 podman[218657]: 2025-10-01 16:49:49.247956079 +0000 UTC m=+1.254426204 container died 6a7ce97b09be4d47a628a719ce3dda1ee199ec594edd23de6f6dc319f43db935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lichterman, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:49:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-8162aa976e855b14c92c7c54a86b5f34931835200e68e83f1ae4abb1eea34014-merged.mount: Deactivated successfully.
Oct 01 16:49:49 compute-0 podman[218657]: 2025-10-01 16:49:49.330458883 +0000 UTC m=+1.336928978 container remove 6a7ce97b09be4d47a628a719ce3dda1ee199ec594edd23de6f6dc319f43db935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 16:49:49 compute-0 systemd[1]: libpod-conmon-6a7ce97b09be4d47a628a719ce3dda1ee199ec594edd23de6f6dc319f43db935.scope: Deactivated successfully.
Oct 01 16:49:49 compute-0 sudo[218425]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:49 compute-0 sudo[218952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:49:49 compute-0 sudo[218952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:49:49 compute-0 sudo[218952]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:49 compute-0 sudo[218998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:49:49 compute-0 sudo[218998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:49:49 compute-0 sudo[218998]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:49 compute-0 sudo[219043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:49:49 compute-0 sudo[219043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:49:49 compute-0 sudo[219043]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:49 compute-0 sudo[219078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:49:49 compute-0 sudo[219078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:49:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:49 compute-0 sudo[219146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fglnstrcylpvxlpyxegetfhqhwspujnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337389.336442-1301-28879276411339/AnsiballZ_stat.py'
Oct 01 16:49:49 compute-0 sudo[219146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:49 compute-0 python3.9[219148]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:49:49 compute-0 sudo[219146]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:49 compute-0 podman[219191]: 2025-10-01 16:49:49.971104966 +0000 UTC m=+0.048925140 container create 2c055e5340a905e9cf147b8fc6cbb535ea0199183a80bc32186452b75038a310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_easley, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct 01 16:49:50 compute-0 systemd[1]: Started libpod-conmon-2c055e5340a905e9cf147b8fc6cbb535ea0199183a80bc32186452b75038a310.scope.
Oct 01 16:49:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:49:50 compute-0 podman[219191]: 2025-10-01 16:49:49.956462447 +0000 UTC m=+0.034282651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:49:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:49:50 compute-0 podman[219191]: 2025-10-01 16:49:50.068221329 +0000 UTC m=+0.146041513 container init 2c055e5340a905e9cf147b8fc6cbb535ea0199183a80bc32186452b75038a310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:49:50 compute-0 podman[219191]: 2025-10-01 16:49:50.074993351 +0000 UTC m=+0.152813575 container start 2c055e5340a905e9cf147b8fc6cbb535ea0199183a80bc32186452b75038a310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:49:50 compute-0 podman[219191]: 2025-10-01 16:49:50.079364817 +0000 UTC m=+0.157185031 container attach 2c055e5340a905e9cf147b8fc6cbb535ea0199183a80bc32186452b75038a310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 16:49:50 compute-0 jovial_easley[219248]: 167 167
Oct 01 16:49:50 compute-0 systemd[1]: libpod-2c055e5340a905e9cf147b8fc6cbb535ea0199183a80bc32186452b75038a310.scope: Deactivated successfully.
Oct 01 16:49:50 compute-0 podman[219191]: 2025-10-01 16:49:50.082947139 +0000 UTC m=+0.160767323 container died 2c055e5340a905e9cf147b8fc6cbb535ea0199183a80bc32186452b75038a310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 16:49:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f984652c75d67db4ff32a7af61847dd41a30745bafd2d520016d70b1c59bc7dd-merged.mount: Deactivated successfully.
Oct 01 16:49:50 compute-0 podman[219191]: 2025-10-01 16:49:50.125967772 +0000 UTC m=+0.203787976 container remove 2c055e5340a905e9cf147b8fc6cbb535ea0199183a80bc32186452b75038a310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 16:49:50 compute-0 sudo[219296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltpmisphkbungandynagkylrcijmwezh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337389.336442-1301-28879276411339/AnsiballZ_file.py'
Oct 01 16:49:50 compute-0 systemd[1]: libpod-conmon-2c055e5340a905e9cf147b8fc6cbb535ea0199183a80bc32186452b75038a310.scope: Deactivated successfully.
Oct 01 16:49:50 compute-0 sudo[219296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:50 compute-0 podman[219307]: 2025-10-01 16:49:50.310042707 +0000 UTC m=+0.051965379 container create 0bcbd402a0fefe1fa2055e233cf572e29651d5e2c28c19319744e5db574f57f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mayer, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:49:50 compute-0 python3.9[219301]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:50 compute-0 systemd[1]: Started libpod-conmon-0bcbd402a0fefe1fa2055e233cf572e29651d5e2c28c19319744e5db574f57f3.scope.
Oct 01 16:49:50 compute-0 sudo[219296]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:50 compute-0 podman[219307]: 2025-10-01 16:49:50.282572939 +0000 UTC m=+0.024495671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:49:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:49:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9017d982d86f6f94032704107d5ee657e6626ed7f4bc756daef5faa19874cdcf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:49:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9017d982d86f6f94032704107d5ee657e6626ed7f4bc756daef5faa19874cdcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:49:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9017d982d86f6f94032704107d5ee657e6626ed7f4bc756daef5faa19874cdcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:49:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9017d982d86f6f94032704107d5ee657e6626ed7f4bc756daef5faa19874cdcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:49:50 compute-0 podman[219307]: 2025-10-01 16:49:50.400449339 +0000 UTC m=+0.142371991 container init 0bcbd402a0fefe1fa2055e233cf572e29651d5e2c28c19319744e5db574f57f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:49:50 compute-0 podman[219307]: 2025-10-01 16:49:50.416738328 +0000 UTC m=+0.158660970 container start 0bcbd402a0fefe1fa2055e233cf572e29651d5e2c28c19319744e5db574f57f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:49:50 compute-0 podman[219307]: 2025-10-01 16:49:50.419840847 +0000 UTC m=+0.161763489 container attach 0bcbd402a0fefe1fa2055e233cf572e29651d5e2c28c19319744e5db574f57f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 16:49:50 compute-0 ceph-mon[74273]: pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:51 compute-0 sudo[219482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikzdslbvseorihvszwepdbplgkhyzqfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337390.5517313-1313-198455571253806/AnsiballZ_stat.py'
Oct 01 16:49:51 compute-0 sudo[219482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]: {
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:     "0": [
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:         {
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "devices": [
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "/dev/loop3"
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             ],
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "lv_name": "ceph_lv0",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "lv_size": "21470642176",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "name": "ceph_lv0",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "tags": {
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.cluster_name": "ceph",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.crush_device_class": "",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.encrypted": "0",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.osd_id": "0",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.type": "block",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.vdo": "0"
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             },
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "type": "block",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "vg_name": "ceph_vg0"
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:         }
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:     ],
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:     "1": [
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:         {
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "devices": [
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "/dev/loop4"
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             ],
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "lv_name": "ceph_lv1",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "lv_size": "21470642176",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "name": "ceph_lv1",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "tags": {
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.cluster_name": "ceph",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.crush_device_class": "",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.encrypted": "0",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.osd_id": "1",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.type": "block",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.vdo": "0"
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             },
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "type": "block",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "vg_name": "ceph_vg1"
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:         }
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:     ],
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:     "2": [
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:         {
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "devices": [
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "/dev/loop5"
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             ],
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "lv_name": "ceph_lv2",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "lv_size": "21470642176",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "name": "ceph_lv2",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "tags": {
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.cluster_name": "ceph",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.crush_device_class": "",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.encrypted": "0",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.osd_id": "2",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.type": "block",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:                 "ceph.vdo": "0"
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             },
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "type": "block",
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:             "vg_name": "ceph_vg2"
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:         }
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]:     ]
Oct 01 16:49:51 compute-0 eloquent_mayer[219324]: }
Oct 01 16:49:51 compute-0 systemd[1]: libpod-0bcbd402a0fefe1fa2055e233cf572e29651d5e2c28c19319744e5db574f57f3.scope: Deactivated successfully.
Oct 01 16:49:51 compute-0 podman[219307]: 2025-10-01 16:49:51.205983118 +0000 UTC m=+0.947905800 container died 0bcbd402a0fefe1fa2055e233cf572e29651d5e2c28c19319744e5db574f57f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 01 16:49:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9017d982d86f6f94032704107d5ee657e6626ed7f4bc756daef5faa19874cdcf-merged.mount: Deactivated successfully.
Oct 01 16:49:51 compute-0 podman[219307]: 2025-10-01 16:49:51.297303506 +0000 UTC m=+1.039226188 container remove 0bcbd402a0fefe1fa2055e233cf572e29651d5e2c28c19319744e5db574f57f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mayer, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:49:51 compute-0 systemd[1]: libpod-conmon-0bcbd402a0fefe1fa2055e233cf572e29651d5e2c28c19319744e5db574f57f3.scope: Deactivated successfully.
Oct 01 16:49:51 compute-0 sudo[219078]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:51 compute-0 python3.9[219484]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:49:51 compute-0 sudo[219482]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:51 compute-0 sudo[219500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:49:51 compute-0 sudo[219500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:49:51 compute-0 sudo[219500]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:51 compute-0 sudo[219534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:49:51 compute-0 sudo[219534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:49:51 compute-0 sudo[219534]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:51 compute-0 sudo[219587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:49:51 compute-0 sudo[219587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:49:51 compute-0 sudo[219587]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:51 compute-0 sudo[219644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:49:51 compute-0 sudo[219644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:49:51 compute-0 sudo[219722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtcjchdcpddqxdyrqvanmnlczhnzxlkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337390.5517313-1313-198455571253806/AnsiballZ_copy.py'
Oct 01 16:49:51 compute-0 sudo[219722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:51 compute-0 podman[219766]: 2025-10-01 16:49:51.983449837 +0000 UTC m=+0.047359810 container create 844c0a19ca8e6bb9094868922bf12bdc86a41dcbda3cf43832c33b2985aa560b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:49:51 compute-0 python3.9[219730]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759337390.5517313-1313-198455571253806/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:52 compute-0 sudo[219722]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:52 compute-0 systemd[1]: Started libpod-conmon-844c0a19ca8e6bb9094868922bf12bdc86a41dcbda3cf43832c33b2985aa560b.scope.
Oct 01 16:49:52 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:49:52 compute-0 podman[219766]: 2025-10-01 16:49:52.043088812 +0000 UTC m=+0.106998815 container init 844c0a19ca8e6bb9094868922bf12bdc86a41dcbda3cf43832c33b2985aa560b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_cohen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:49:52 compute-0 podman[219766]: 2025-10-01 16:49:52.051497733 +0000 UTC m=+0.115407686 container start 844c0a19ca8e6bb9094868922bf12bdc86a41dcbda3cf43832c33b2985aa560b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 16:49:52 compute-0 unruffled_cohen[219782]: 167 167
Oct 01 16:49:52 compute-0 podman[219766]: 2025-10-01 16:49:51.961184851 +0000 UTC m=+0.025094854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:49:52 compute-0 podman[219766]: 2025-10-01 16:49:52.055121295 +0000 UTC m=+0.119031288 container attach 844c0a19ca8e6bb9094868922bf12bdc86a41dcbda3cf43832c33b2985aa560b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 16:49:52 compute-0 podman[219766]: 2025-10-01 16:49:52.055674779 +0000 UTC m=+0.119584742 container died 844c0a19ca8e6bb9094868922bf12bdc86a41dcbda3cf43832c33b2985aa560b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_cohen, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:49:52 compute-0 systemd[1]: libpod-844c0a19ca8e6bb9094868922bf12bdc86a41dcbda3cf43832c33b2985aa560b.scope: Deactivated successfully.
Oct 01 16:49:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-6445dd00edb1c7baae55b6800c7c9eed706906a949d0941274d304629b9725f8-merged.mount: Deactivated successfully.
Oct 01 16:49:52 compute-0 podman[219766]: 2025-10-01 16:49:52.100548163 +0000 UTC m=+0.164458116 container remove 844c0a19ca8e6bb9094868922bf12bdc86a41dcbda3cf43832c33b2985aa560b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:49:52 compute-0 systemd[1]: libpod-conmon-844c0a19ca8e6bb9094868922bf12bdc86a41dcbda3cf43832c33b2985aa560b.scope: Deactivated successfully.
Oct 01 16:49:52 compute-0 podman[219821]: 2025-10-01 16:49:52.179032372 +0000 UTC m=+0.066563377 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 01 16:49:52 compute-0 podman[219874]: 2025-10-01 16:49:52.28019275 +0000 UTC m=+0.037558980 container create f91e1c65b7da83f0a065432e6c748f8b1ec9e27147054aa2186680b80d051274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lovelace, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:49:52 compute-0 systemd[1]: Started libpod-conmon-f91e1c65b7da83f0a065432e6c748f8b1ec9e27147054aa2186680b80d051274.scope.
Oct 01 16:49:52 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:49:52 compute-0 podman[219874]: 2025-10-01 16:49:52.262560232 +0000 UTC m=+0.019926422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:49:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57cdcb6c5d6b9b385b4951b4724c595cbd6588ef71df4242c17b33c7d298cf30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:49:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57cdcb6c5d6b9b385b4951b4724c595cbd6588ef71df4242c17b33c7d298cf30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:49:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57cdcb6c5d6b9b385b4951b4724c595cbd6588ef71df4242c17b33c7d298cf30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:49:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57cdcb6c5d6b9b385b4951b4724c595cbd6588ef71df4242c17b33c7d298cf30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:49:52 compute-0 podman[219874]: 2025-10-01 16:49:52.371561758 +0000 UTC m=+0.128927968 container init f91e1c65b7da83f0a065432e6c748f8b1ec9e27147054aa2186680b80d051274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lovelace, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 16:49:52 compute-0 podman[219874]: 2025-10-01 16:49:52.386828501 +0000 UTC m=+0.144194731 container start f91e1c65b7da83f0a065432e6c748f8b1ec9e27147054aa2186680b80d051274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 16:49:52 compute-0 podman[219874]: 2025-10-01 16:49:52.392484616 +0000 UTC m=+0.149850826 container attach f91e1c65b7da83f0a065432e6c748f8b1ec9e27147054aa2186680b80d051274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lovelace, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:49:52 compute-0 sudo[219997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wydxvqsztniptarnthyxeffonwuebdte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337392.2057772-1328-223663898308125/AnsiballZ_file.py'
Oct 01 16:49:52 compute-0 sudo[219997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:52 compute-0 ceph-mon[74273]: pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:52 compute-0 python3.9[219999]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:52 compute-0 sudo[219997]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:53 compute-0 sudo[220169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adltdpkfypsjqmegoulxzlznckqyzyhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337392.9766378-1336-226271502214572/AnsiballZ_command.py'
Oct 01 16:49:53 compute-0 sudo[220169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:53 compute-0 elated_lovelace[219919]: {
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:         "osd_id": 2,
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:         "type": "bluestore"
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:     },
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:         "osd_id": 0,
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:         "type": "bluestore"
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:     },
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:         "osd_id": 1,
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:         "type": "bluestore"
Oct 01 16:49:53 compute-0 elated_lovelace[219919]:     }
Oct 01 16:49:53 compute-0 elated_lovelace[219919]: }
Oct 01 16:49:53 compute-0 systemd[1]: libpod-f91e1c65b7da83f0a065432e6c748f8b1ec9e27147054aa2186680b80d051274.scope: Deactivated successfully.
Oct 01 16:49:53 compute-0 systemd[1]: libpod-f91e1c65b7da83f0a065432e6c748f8b1ec9e27147054aa2186680b80d051274.scope: Consumed 1.024s CPU time.
Oct 01 16:49:53 compute-0 podman[219874]: 2025-10-01 16:49:53.413365652 +0000 UTC m=+1.170731882 container died f91e1c65b7da83f0a065432e6c748f8b1ec9e27147054aa2186680b80d051274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lovelace, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 16:49:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-57cdcb6c5d6b9b385b4951b4724c595cbd6588ef71df4242c17b33c7d298cf30-merged.mount: Deactivated successfully.
Oct 01 16:49:53 compute-0 podman[219874]: 2025-10-01 16:49:53.482981557 +0000 UTC m=+1.240347787 container remove f91e1c65b7da83f0a065432e6c748f8b1ec9e27147054aa2186680b80d051274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 16:49:53 compute-0 systemd[1]: libpod-conmon-f91e1c65b7da83f0a065432e6c748f8b1ec9e27147054aa2186680b80d051274.scope: Deactivated successfully.
Oct 01 16:49:53 compute-0 sudo[219644]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:49:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:49:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:49:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:49:53 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev c5257922-10ca-43cc-84b4-81cf526f4f13 does not exist
Oct 01 16:49:53 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev e37e4bee-633f-4dc3-874b-bfdf0a3ed6f1 does not exist
Oct 01 16:49:53 compute-0 python3.9[220173]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:49:53 compute-0 sudo[220193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:49:53 compute-0 sudo[220193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:49:53 compute-0 sudo[220193]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:53 compute-0 sudo[220169]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:53 compute-0 sudo[220221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:49:53 compute-0 sudo[220221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:49:53 compute-0 sudo[220221]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:54 compute-0 sudo[220395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqkryrifkmehybhxndvbiceohnvyojtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337393.8206036-1344-4269446055232/AnsiballZ_blockinfile.py'
Oct 01 16:49:54 compute-0 sudo[220395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:49:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:49:54 compute-0 ceph-mon[74273]: pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:54 compute-0 python3.9[220397]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:54 compute-0 sudo[220395]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:49:55 compute-0 sudo[220547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vekedtaxlncvfnkghefauzodyvwxobec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337394.886893-1353-127147595657919/AnsiballZ_command.py'
Oct 01 16:49:55 compute-0 sudo[220547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:55 compute-0 python3.9[220549]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:49:55 compute-0 sudo[220547]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:56 compute-0 sudo[220700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuntodhmbnquhdnlmuwurktuxkymzkut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337395.7384076-1361-59060091758869/AnsiballZ_stat.py'
Oct 01 16:49:56 compute-0 sudo[220700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:56 compute-0 python3.9[220702]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:49:56 compute-0 sudo[220700]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:56 compute-0 ceph-mon[74273]: pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:56 compute-0 sudo[220854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veohoetwgoosfnizomhqhmukpyvbiajg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337396.5004385-1369-59001356194608/AnsiballZ_command.py'
Oct 01 16:49:56 compute-0 sudo[220854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:57 compute-0 python3.9[220856]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:49:57 compute-0 sudo[220854]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:57 compute-0 sudo[221009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksvsrhbeaxjqwwsjarjteqxjgsdukohc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337397.2897675-1377-74613633119940/AnsiballZ_file.py'
Oct 01 16:49:57 compute-0 sudo[221009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:57 compute-0 python3.9[221011]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:57 compute-0 sudo[221009]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:58 compute-0 sudo[221161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvqcosmdzysfmatgyfjtcznbgrpeizbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337398.0906537-1385-272864615423553/AnsiballZ_stat.py'
Oct 01 16:49:58 compute-0 sudo[221161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:58 compute-0 python3.9[221163]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:49:58 compute-0 sudo[221161]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:58 compute-0 ceph-mon[74273]: pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:59 compute-0 sudo[221284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgkldblaxtyiohisemvxkakggalbtlvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337398.0906537-1385-272864615423553/AnsiballZ_copy.py'
Oct 01 16:49:59 compute-0 sudo[221284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:49:59 compute-0 python3.9[221286]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759337398.0906537-1385-272864615423553/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:49:59 compute-0 sudo[221284]: pam_unix(sudo:session): session closed for user root
Oct 01 16:49:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:49:59 compute-0 sudo[221436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rspuovwpdodlftqmxagzemntekduisas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337399.5331538-1400-143214218318180/AnsiballZ_stat.py'
Oct 01 16:49:59 compute-0 sudo[221436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:50:00 compute-0 python3.9[221438]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:50:00 compute-0 sudo[221436]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:00 compute-0 sudo[221559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnnsyuebulbnrlsymwoleyprwmhoaxbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337399.5331538-1400-143214218318180/AnsiballZ_copy.py'
Oct 01 16:50:00 compute-0 sudo[221559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:00 compute-0 python3.9[221561]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759337399.5331538-1400-143214218318180/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:50:00 compute-0 sudo[221559]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:00 compute-0 ceph-mon[74273]: pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:01 compute-0 sudo[221711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdcgpfpntlhlxgyyznfahrgymnwymmpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337400.8722825-1415-100704358035952/AnsiballZ_stat.py'
Oct 01 16:50:01 compute-0 sudo[221711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:01 compute-0 python3.9[221713]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:50:01 compute-0 sudo[221711]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:01 compute-0 sudo[221834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yobpcopvialnnklmwxlelybhmetoaqqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337400.8722825-1415-100704358035952/AnsiballZ_copy.py'
Oct 01 16:50:01 compute-0 sudo[221834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:02 compute-0 python3.9[221836]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759337400.8722825-1415-100704358035952/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:50:02 compute-0 sudo[221834]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:02 compute-0 sudo[221986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkwqekyttuboilfatokukzcoezoqqhgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337402.2201962-1430-250760032569670/AnsiballZ_systemd.py'
Oct 01 16:50:02 compute-0 sudo[221986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:02 compute-0 python3.9[221988]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:50:02 compute-0 ceph-mon[74273]: pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:02 compute-0 systemd[1]: Reloading.
Oct 01 16:50:02 compute-0 systemd-rc-local-generator[222016]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:50:02 compute-0 systemd-sysv-generator[222020]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:50:03 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Oct 01 16:50:03 compute-0 sudo[221986]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:03 compute-0 sudo[222177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbdtypyonvxgpnwdbwemdjhofcoywftj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337403.332994-1438-137740074172376/AnsiballZ_systemd.py'
Oct 01 16:50:03 compute-0 sudo[222177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:04 compute-0 python3.9[222179]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 01 16:50:04 compute-0 systemd[1]: Reloading.
Oct 01 16:50:04 compute-0 systemd-rc-local-generator[222208]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:50:04 compute-0 systemd-sysv-generator[222213]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:50:04 compute-0 systemd[1]: Reloading.
Oct 01 16:50:04 compute-0 systemd-rc-local-generator[222240]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:50:04 compute-0 systemd-sysv-generator[222244]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:50:04 compute-0 ceph-mon[74273]: pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:04 compute-0 sudo[222177]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:50:05 compute-0 sshd-session[162617]: Connection closed by 192.168.122.30 port 36624
Oct 01 16:50:05 compute-0 sshd-session[162614]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:50:05 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Oct 01 16:50:05 compute-0 systemd[1]: session-49.scope: Consumed 3min 44.671s CPU time.
Oct 01 16:50:05 compute-0 systemd-logind[788]: Session 49 logged out. Waiting for processes to exit.
Oct 01 16:50:05 compute-0 systemd-logind[788]: Removed session 49.
Oct 01 16:50:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:06 compute-0 ceph-mon[74273]: pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:08 compute-0 ceph-mon[74273]: pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:50:10 compute-0 ceph-mon[74273]: pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:10 compute-0 sshd-session[222276]: Accepted publickey for zuul from 192.168.122.30 port 44060 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:50:10 compute-0 systemd-logind[788]: New session 50 of user zuul.
Oct 01 16:50:10 compute-0 systemd[1]: Started Session 50 of User zuul.
Oct 01 16:50:10 compute-0 sshd-session[222276]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:50:11
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['.mgr', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes']
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:50:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:12 compute-0 python3.9[222429]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:50:12 compute-0 ceph-mon[74273]: pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:13 compute-0 sudo[222583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejrywynxfyoopxxmgkfktptzmbkgrsac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337412.8434124-34-275067078268258/AnsiballZ_file.py'
Oct 01 16:50:13 compute-0 sudo[222583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:13 compute-0 python3.9[222585]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:50:13 compute-0 sudo[222583]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:14 compute-0 sudo[222735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhspnvjvgohwvfdzslsukzgkcuoryvkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337413.7911563-34-217270267492318/AnsiballZ_file.py'
Oct 01 16:50:14 compute-0 sudo[222735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:14 compute-0 python3.9[222737]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:50:14 compute-0 sudo[222735]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:14 compute-0 ceph-mon[74273]: pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:50:15 compute-0 sudo[222887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quvcardmmunhpktjybsunjhgylpcfnvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337414.6124198-34-217738393607313/AnsiballZ_file.py'
Oct 01 16:50:15 compute-0 sudo[222887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:15 compute-0 python3.9[222889]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:50:15 compute-0 sudo[222887]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:15 compute-0 sudo[223039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrwozwrbdzngifetoymibjrtnnmgpfdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337415.4508212-34-156238666522408/AnsiballZ_file.py'
Oct 01 16:50:15 compute-0 sudo[223039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:16 compute-0 python3.9[223041]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 01 16:50:16 compute-0 sudo[223039]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:16 compute-0 sudo[223191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmxnwrncnzoplolbjipmhxhwklxxlnan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337416.2429178-34-206511264135284/AnsiballZ_file.py'
Oct 01 16:50:16 compute-0 sudo[223191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:16 compute-0 python3.9[223193]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:50:16 compute-0 sudo[223191]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:16 compute-0 ceph-mon[74273]: pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:17 compute-0 sudo[223353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edxlijwangmbectydnqajcyhcridcllu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337416.930704-70-246015441321860/AnsiballZ_stat.py'
Oct 01 16:50:17 compute-0 sudo[223353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:17 compute-0 podman[223317]: 2025-10-01 16:50:17.495861667 +0000 UTC m=+0.115800286 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 16:50:17 compute-0 python3.9[223356]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:50:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:17 compute-0 sudo[223353]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:18 compute-0 sudo[223523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emeizcrnpdrdgxtidsggptyjkxybmhub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337417.7849565-78-173655875162867/AnsiballZ_systemd.py'
Oct 01 16:50:18 compute-0 sudo[223523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:18 compute-0 python3.9[223525]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:50:18 compute-0 systemd[1]: Reloading.
Oct 01 16:50:18 compute-0 ceph-mon[74273]: pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:18 compute-0 systemd-rc-local-generator[223553]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:50:18 compute-0 systemd-sysv-generator[223558]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:50:19 compute-0 sudo[223523]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:19 compute-0 sudo[223712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzwskqtulzciembrvqtkvhcsgtskbzej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337419.4102352-86-147742759818204/AnsiballZ_service_facts.py'
Oct 01 16:50:19 compute-0 sudo[223712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:50:19.950 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:50:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:50:19.951 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:50:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:50:19.951 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:50:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:50:20 compute-0 python3.9[223714]: ansible-ansible.builtin.service_facts Invoked
Oct 01 16:50:20 compute-0 network[223731]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 01 16:50:20 compute-0 network[223732]: 'network-scripts' will be removed from distribution in near future.
Oct 01 16:50:20 compute-0 network[223733]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 01 16:50:20 compute-0 ceph-mon[74273]: pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:50:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:22 compute-0 podman[223755]: 2025-10-01 16:50:22.640229028 +0000 UTC m=+0.083814791 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 01 16:50:22 compute-0 ceph-mon[74273]: pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:24 compute-0 ceph-mon[74273]: pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:50:25 compute-0 sudo[223712]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:25 compute-0 sudo[224024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nplravaubbbmcbmnrxqejngymouoqtup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337425.516856-94-215918201146486/AnsiballZ_systemd.py'
Oct 01 16:50:25 compute-0 sudo[224024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:26 compute-0 python3.9[224026]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:50:26 compute-0 systemd[1]: Reloading.
Oct 01 16:50:26 compute-0 systemd-sysv-generator[224059]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:50:26 compute-0 systemd-rc-local-generator[224056]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:50:26 compute-0 sudo[224024]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:26 compute-0 ceph-mon[74273]: pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:27 compute-0 python3.9[224213]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:50:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:28 compute-0 sudo[224363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yximoveleqkzfierapassjegoqorbjmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337427.6193688-111-46733847207480/AnsiballZ_podman_container.py'
Oct 01 16:50:28 compute-0 sudo[224363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:28 compute-0 python3.9[224365]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None 
pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 01 16:50:28 compute-0 ceph-mon[74273]: pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:28 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 16:50:28 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 16:50:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:29 compute-0 podman[224378]: 2025-10-01 16:50:29.738220355 +0000 UTC m=+1.267908164 image pull 81d94872551c3ae3c30801602bbb5f0c44872f15dcde472a0ba869fe2f28966e quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 01 16:50:29 compute-0 podman[224436]: 2025-10-01 16:50:29.887228412 +0000 UTC m=+0.062089355 container create ee45b41b5c4bf0647fc77bc0eb91d0cf523b9d78307dddf2f37e596245d18e4c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 01 16:50:29 compute-0 NetworkManager[44927]: <info>  [1759337429.9267] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/21)
Oct 01 16:50:29 compute-0 podman[224436]: 2025-10-01 16:50:29.85158116 +0000 UTC m=+0.026442173 image pull 81d94872551c3ae3c30801602bbb5f0c44872f15dcde472a0ba869fe2f28966e quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 01 16:50:29 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 01 16:50:29 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 01 16:50:29 compute-0 kernel: veth0: entered allmulticast mode
Oct 01 16:50:29 compute-0 kernel: veth0: entered promiscuous mode
Oct 01 16:50:29 compute-0 NetworkManager[44927]: <info>  [1759337429.9466] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/22)
Oct 01 16:50:29 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 01 16:50:29 compute-0 kernel: podman0: port 1(veth0) entered forwarding state
Oct 01 16:50:29 compute-0 NetworkManager[44927]: <info>  [1759337429.9496] device (veth0): carrier: link connected
Oct 01 16:50:29 compute-0 NetworkManager[44927]: <info>  [1759337429.9501] device (podman0): carrier: link connected
Oct 01 16:50:29 compute-0 systemd-udevd[224465]: Network interface NamePolicy= disabled on kernel command line.
Oct 01 16:50:29 compute-0 systemd-udevd[224467]: Network interface NamePolicy= disabled on kernel command line.
Oct 01 16:50:29 compute-0 NetworkManager[44927]: <info>  [1759337429.9960] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 01 16:50:29 compute-0 NetworkManager[44927]: <info>  [1759337429.9972] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 01 16:50:29 compute-0 NetworkManager[44927]: <info>  [1759337429.9984] device (podman0): Activation: starting connection 'podman0' (d31b1986-4719-491f-84dc-0cb288f801c7)
Oct 01 16:50:29 compute-0 NetworkManager[44927]: <info>  [1759337429.9985] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 01 16:50:29 compute-0 NetworkManager[44927]: <info>  [1759337429.9990] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 01 16:50:29 compute-0 NetworkManager[44927]: <info>  [1759337429.9992] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 01 16:50:29 compute-0 NetworkManager[44927]: <info>  [1759337429.9995] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 01 16:50:30 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 01 16:50:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:50:30 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 01 16:50:30 compute-0 NetworkManager[44927]: <info>  [1759337430.0362] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 01 16:50:30 compute-0 NetworkManager[44927]: <info>  [1759337430.0367] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 01 16:50:30 compute-0 NetworkManager[44927]: <info>  [1759337430.0380] device (podman0): Activation: successful, device activated.
Oct 01 16:50:30 compute-0 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 01 16:50:30 compute-0 systemd[1]: Started libpod-conmon-ee45b41b5c4bf0647fc77bc0eb91d0cf523b9d78307dddf2f37e596245d18e4c.scope.
Oct 01 16:50:30 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:50:30 compute-0 podman[224436]: 2025-10-01 16:50:30.36610894 +0000 UTC m=+0.540969933 container init ee45b41b5c4bf0647fc77bc0eb91d0cf523b9d78307dddf2f37e596245d18e4c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 01 16:50:30 compute-0 podman[224436]: 2025-10-01 16:50:30.382467715 +0000 UTC m=+0.557328658 container start ee45b41b5c4bf0647fc77bc0eb91d0cf523b9d78307dddf2f37e596245d18e4c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 01 16:50:30 compute-0 podman[224436]: 2025-10-01 16:50:30.386452716 +0000 UTC m=+0.561313719 container attach ee45b41b5c4bf0647fc77bc0eb91d0cf523b9d78307dddf2f37e596245d18e4c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 16:50:30 compute-0 iscsid_config[224594]: iqn.1994-05.com.redhat:a057b11fc94f
Oct 01 16:50:30 compute-0 systemd[1]: libpod-ee45b41b5c4bf0647fc77bc0eb91d0cf523b9d78307dddf2f37e596245d18e4c.scope: Deactivated successfully.
Oct 01 16:50:30 compute-0 podman[224436]: 2025-10-01 16:50:30.388702256 +0000 UTC m=+0.563563169 container died ee45b41b5c4bf0647fc77bc0eb91d0cf523b9d78307dddf2f37e596245d18e4c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 16:50:30 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 01 16:50:30 compute-0 kernel: veth0 (unregistering): left allmulticast mode
Oct 01 16:50:30 compute-0 kernel: veth0 (unregistering): left promiscuous mode
Oct 01 16:50:30 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 01 16:50:30 compute-0 NetworkManager[44927]: <info>  [1759337430.4460] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 01 16:50:30 compute-0 systemd[1]: run-netns-netns\x2d0f818940\x2d33a9\x2defcd\x2dd61f\x2d8a2bfc9f8e7d.mount: Deactivated successfully.
Oct 01 16:50:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ee45b41b5c4bf0647fc77bc0eb91d0cf523b9d78307dddf2f37e596245d18e4c-userdata-shm.mount: Deactivated successfully.
Oct 01 16:50:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-d16bfc09d3b3efb100cda17bc2f6737cef07abc1e6f945f37946d390cb53f122-merged.mount: Deactivated successfully.
Oct 01 16:50:30 compute-0 podman[224436]: 2025-10-01 16:50:30.867398037 +0000 UTC m=+1.042258990 container remove ee45b41b5c4bf0647fc77bc0eb91d0cf523b9d78307dddf2f37e596245d18e4c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 01 16:50:30 compute-0 python3.9[224365]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid:current-podified /usr/sbin/iscsi-iname
Oct 01 16:50:30 compute-0 systemd[1]: libpod-conmon-ee45b41b5c4bf0647fc77bc0eb91d0cf523b9d78307dddf2f37e596245d18e4c.scope: Deactivated successfully.
Oct 01 16:50:30 compute-0 ceph-mon[74273]: pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:31 compute-0 python3.9[224365]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: 
                                             DEPRECATED command:
                                             It is recommended to use Quadlets for running containers and pods under systemd.
                                             
                                             Please refer to podman-systemd.unit(5) for details.
                                             Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 01 16:50:31 compute-0 sudo[224363]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:31 compute-0 sudo[224830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbavnqszggdrqivrnwquqrwobbahdlri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337431.229822-119-83272218556732/AnsiballZ_stat.py'
Oct 01 16:50:31 compute-0 sudo[224830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:31 compute-0 python3.9[224832]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:50:31 compute-0 sudo[224830]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:32 compute-0 sudo[224953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inexdyrjglalajhbrfremafvsryvkurn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337431.229822-119-83272218556732/AnsiballZ_copy.py'
Oct 01 16:50:32 compute-0 sudo[224953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:32 compute-0 python3.9[224955]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759337431.229822-119-83272218556732/.source.iscsi _original_basename=.ej8vvwic follow=False checksum=811885445ecd63f0e9727a523f6e4a58cfde8a8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:50:32 compute-0 sudo[224953]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:32 compute-0 ceph-mon[74273]: pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:33 compute-0 sudo[225105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnbxiqcwxxjysqhrxwmdohphkceebege ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337432.8994398-134-164301782644070/AnsiballZ_file.py'
Oct 01 16:50:33 compute-0 sudo[225105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:33 compute-0 python3.9[225107]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:50:33 compute-0 sudo[225105]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:34 compute-0 python3.9[225257]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:50:34 compute-0 ceph-mon[74273]: pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:35 compute-0 sudo[225409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvmtqkdrxpvyogcwxlnsjhcvrubpjcbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337434.4634025-151-264725242623627/AnsiballZ_lineinfile.py'
Oct 01 16:50:35 compute-0 sudo[225409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:50:35 compute-0 python3.9[225411]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:50:35 compute-0 sudo[225409]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:35 compute-0 sudo[225561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgsuabcfwhomntvpklyaysoqqxgpvzdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337435.5527446-160-144713632962056/AnsiballZ_file.py'
Oct 01 16:50:35 compute-0 sudo[225561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:36 compute-0 python3.9[225563]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:50:36 compute-0 sudo[225561]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:36 compute-0 sudo[225713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crpyfimlznygaqhvmifejhwdpjarpsad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337436.3874807-168-275685350440849/AnsiballZ_stat.py'
Oct 01 16:50:36 compute-0 sudo[225713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:36 compute-0 ceph-mon[74273]: pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:37 compute-0 python3.9[225715]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:50:37 compute-0 sudo[225713]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:37 compute-0 sudo[225791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjknjhizxxbicfxgqanqihnszrytdnbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337436.3874807-168-275685350440849/AnsiballZ_file.py'
Oct 01 16:50:37 compute-0 sudo[225791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:37 compute-0 python3.9[225793]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:50:37 compute-0 sudo[225791]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:38 compute-0 sudo[225943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imxmmbmczcurfpkhuxriztwhuzavmkhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337437.7815895-168-71516171390558/AnsiballZ_stat.py'
Oct 01 16:50:38 compute-0 sudo[225943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:38 compute-0 python3.9[225945]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:50:38 compute-0 sudo[225943]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:38 compute-0 sudo[226021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjeliadeikjpknskrdjsycdtqbgjwgtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337437.7815895-168-71516171390558/AnsiballZ_file.py'
Oct 01 16:50:38 compute-0 sudo[226021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:38 compute-0 python3.9[226023]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:50:38 compute-0 sudo[226021]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:38 compute-0 ceph-mon[74273]: pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:39 compute-0 sudo[226173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkagnnubpaqtovrokblxewbzvdwlhxhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337439.1667757-191-4733085181776/AnsiballZ_file.py'
Oct 01 16:50:39 compute-0 sudo[226173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:39 compute-0 python3.9[226175]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:50:39 compute-0 sudo[226173]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:50:40 compute-0 sudo[226325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pycjwiejplucokdwcqgilrehgzsehgsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337439.8931763-199-247673114683689/AnsiballZ_stat.py'
Oct 01 16:50:40 compute-0 sudo[226325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:40 compute-0 python3.9[226327]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:50:40 compute-0 sudo[226325]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:40 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 01 16:50:40 compute-0 sudo[226403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-httbnuepsrusptobmrjwetjtmwhlicah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337439.8931763-199-247673114683689/AnsiballZ_file.py'
Oct 01 16:50:40 compute-0 sudo[226403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:40 compute-0 ceph-mon[74273]: pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:41 compute-0 python3.9[226405]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:50:41 compute-0 sudo[226403]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:50:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:50:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:50:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:50:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:50:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:50:41 compute-0 sudo[226555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upruybnqsrqdzzupvljjqogjqeyrpcrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337441.296845-211-22324394950523/AnsiballZ_stat.py'
Oct 01 16:50:41 compute-0 sudo[226555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:41 compute-0 python3.9[226557]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:50:41 compute-0 sudo[226555]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:42 compute-0 sudo[226633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zieaiqctyssilswlnpfwnndtnmaggykb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337441.296845-211-22324394950523/AnsiballZ_file.py'
Oct 01 16:50:42 compute-0 sudo[226633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:42 compute-0 python3.9[226635]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:50:42 compute-0 sudo[226633]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:42 compute-0 sudo[226785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idbsraonsbllvlbvlcgotyuenldhkrqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337442.5666077-223-55031796298737/AnsiballZ_systemd.py'
Oct 01 16:50:42 compute-0 sudo[226785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:42 compute-0 ceph-mon[74273]: pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:43 compute-0 python3.9[226787]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:50:43 compute-0 systemd[1]: Reloading.
Oct 01 16:50:43 compute-0 systemd-sysv-generator[226819]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:50:43 compute-0 systemd-rc-local-generator[226815]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:50:43 compute-0 sudo[226785]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:44 compute-0 sudo[226975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogbyyfzjpvmvvmqcgaatfghprhshfpnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337443.7653322-231-271281774627558/AnsiballZ_stat.py'
Oct 01 16:50:44 compute-0 sudo[226975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:44 compute-0 python3.9[226977]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:50:44 compute-0 sudo[226975]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:44 compute-0 sudo[227053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spgeitewaqkhdsjzfkcskdxtlgfsydwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337443.7653322-231-271281774627558/AnsiballZ_file.py'
Oct 01 16:50:44 compute-0 sudo[227053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:44 compute-0 python3.9[227055]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:50:44 compute-0 sudo[227053]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:44 compute-0 ceph-mon[74273]: pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:50:45 compute-0 sudo[227205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfkbtldqtpjjxhzcdwftwxieeiwntvsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337445.1162112-243-236910212055567/AnsiballZ_stat.py'
Oct 01 16:50:45 compute-0 sudo[227205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:45 compute-0 python3.9[227207]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:50:45 compute-0 sudo[227205]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:46 compute-0 sudo[227283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmwvajhcbmajnobaczxzmmctlptcysye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337445.1162112-243-236910212055567/AnsiballZ_file.py'
Oct 01 16:50:46 compute-0 sudo[227283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:46 compute-0 python3.9[227285]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:50:46 compute-0 sudo[227283]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:46 compute-0 ceph-mon[74273]: pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:47 compute-0 sudo[227435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koepxhekysfknvqobzlllnchuqqpfstn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337446.6182857-255-99939886060149/AnsiballZ_systemd.py'
Oct 01 16:50:47 compute-0 sudo[227435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:47 compute-0 python3.9[227437]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:50:47 compute-0 systemd[1]: Reloading.
Oct 01 16:50:47 compute-0 systemd-sysv-generator[227465]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:50:47 compute-0 systemd-rc-local-generator[227460]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:50:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:47 compute-0 systemd[1]: Starting Create netns directory...
Oct 01 16:50:47 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 01 16:50:47 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 01 16:50:47 compute-0 systemd[1]: Finished Create netns directory.
Oct 01 16:50:47 compute-0 sudo[227435]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:47 compute-0 podman[227474]: 2025-10-01 16:50:47.839867522 +0000 UTC m=+0.140691821 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:50:48 compute-0 sudo[227654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivzatyugzuymtvfgjfnnbxuyrsrmeggt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337448.1308813-265-191295642175273/AnsiballZ_file.py'
Oct 01 16:50:48 compute-0 sudo[227654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:48 compute-0 python3.9[227656]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:50:48 compute-0 sudo[227654]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:49 compute-0 ceph-mon[74273]: pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:49 compute-0 sudo[227806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqxwrmhoqqvecfnotedqanglyfnqrctk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337448.9423172-273-226734979758432/AnsiballZ_stat.py'
Oct 01 16:50:49 compute-0 sudo[227806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:49 compute-0 python3.9[227808]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:50:49 compute-0 sudo[227806]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:49 compute-0 sudo[227929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvfnigdpccrdxkxsunipgysmoijrptxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337448.9423172-273-226734979758432/AnsiballZ_copy.py'
Oct 01 16:50:49 compute-0 sudo[227929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:50:50 compute-0 python3.9[227931]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759337448.9423172-273-226734979758432/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:50:50 compute-0 sudo[227929]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:50 compute-0 sudo[228081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwgzpqeszfbgbodpbryvmrodvfawxqqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337450.4855685-290-174532202597692/AnsiballZ_file.py'
Oct 01 16:50:50 compute-0 sudo[228081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:51 compute-0 ceph-mon[74273]: pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:51 compute-0 python3.9[228083]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:50:51 compute-0 sudo[228081]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:51 compute-0 sudo[228233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obrmpyqptbsxliabayrxqwmzjdowoiym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337451.319968-298-229941609534119/AnsiballZ_stat.py'
Oct 01 16:50:51 compute-0 sudo[228233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:51 compute-0 python3.9[228235]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:50:51 compute-0 sudo[228233]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:52 compute-0 sudo[228356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrcrcrwojvzwvxbzzccemevvisjixfnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337451.319968-298-229941609534119/AnsiballZ_copy.py'
Oct 01 16:50:52 compute-0 sudo[228356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:52 compute-0 python3.9[228358]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759337451.319968-298-229941609534119/.source.json _original_basename=.k3qm4l2l follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:50:52 compute-0 sudo[228356]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:52 compute-0 podman[228359]: 2025-10-01 16:50:52.75578546 +0000 UTC m=+0.060827106 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 01 16:50:53 compute-0 ceph-mon[74273]: pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:53 compute-0 sudo[228526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idicmwstrbgnljiyvneqikfehgdfljax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337452.8822453-313-273711543884572/AnsiballZ_file.py'
Oct 01 16:50:53 compute-0 sudo[228526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:53 compute-0 python3.9[228528]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:50:53 compute-0 sudo[228526]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:53 compute-0 sudo[228553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:50:53 compute-0 sudo[228553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:50:53 compute-0 sudo[228553]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:53 compute-0 sudo[228582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:50:53 compute-0 sudo[228582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:50:53 compute-0 sudo[228582]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:53 compute-0 sudo[228643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:50:53 compute-0 sudo[228643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:50:53 compute-0 sudo[228643]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:54 compute-0 sudo[228692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:50:54 compute-0 sudo[228692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:50:54 compute-0 sudo[228778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdxjespytultbdvacjjzecrjwpijapmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337453.7905536-321-38213032842153/AnsiballZ_stat.py'
Oct 01 16:50:54 compute-0 sudo[228778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:54 compute-0 sudo[228778]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:54 compute-0 sudo[228692]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:50:54 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:50:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:50:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:50:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:50:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:50:54 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev dc4df656-262f-497e-ab06-7f87f5514851 does not exist
Oct 01 16:50:54 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 9bb23d2f-8bcf-4949-97f2-8bd95f970b64 does not exist
Oct 01 16:50:54 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 356ac56d-561b-4fe2-b984-b9e257d8af7e does not exist
Oct 01 16:50:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:50:54 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:50:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:50:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:50:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:50:54 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:50:54 compute-0 sudo[228906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:50:54 compute-0 sudo[228906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:50:54 compute-0 sudo[228906]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:54 compute-0 sudo[228956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yluvjftsoryiwarupivizbzcvjlyfcrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337453.7905536-321-38213032842153/AnsiballZ_copy.py'
Oct 01 16:50:54 compute-0 sudo[228956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:54 compute-0 sudo[228958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:50:54 compute-0 sudo[228958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:50:54 compute-0 sudo[228958]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:55 compute-0 sudo[228985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:50:55 compute-0 sudo[228985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:50:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:50:55 compute-0 ceph-mon[74273]: pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:50:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:50:55 compute-0 sudo[228985]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:50:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:50:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:50:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:50:55 compute-0 sudo[229010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:50:55 compute-0 sudo[229010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:50:55 compute-0 sudo[228956]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:55 compute-0 podman[229099]: 2025-10-01 16:50:55.480718927 +0000 UTC m=+0.065119309 container create 47ebeb71b74ea14007d566eb009abedb6056de46dde315eb4e68844863c55319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_boyd, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 16:50:55 compute-0 systemd[1]: Started libpod-conmon-47ebeb71b74ea14007d566eb009abedb6056de46dde315eb4e68844863c55319.scope.
Oct 01 16:50:55 compute-0 podman[229099]: 2025-10-01 16:50:55.45187322 +0000 UTC m=+0.036273632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:50:55 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:50:55 compute-0 podman[229099]: 2025-10-01 16:50:55.59832041 +0000 UTC m=+0.182720832 container init 47ebeb71b74ea14007d566eb009abedb6056de46dde315eb4e68844863c55319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_boyd, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:50:55 compute-0 podman[229099]: 2025-10-01 16:50:55.607498966 +0000 UTC m=+0.191899318 container start 47ebeb71b74ea14007d566eb009abedb6056de46dde315eb4e68844863c55319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:50:55 compute-0 podman[229099]: 2025-10-01 16:50:55.61105958 +0000 UTC m=+0.195460022 container attach 47ebeb71b74ea14007d566eb009abedb6056de46dde315eb4e68844863c55319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_boyd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 16:50:55 compute-0 jolly_boyd[229138]: 167 167
Oct 01 16:50:55 compute-0 systemd[1]: libpod-47ebeb71b74ea14007d566eb009abedb6056de46dde315eb4e68844863c55319.scope: Deactivated successfully.
Oct 01 16:50:55 compute-0 podman[229099]: 2025-10-01 16:50:55.615979522 +0000 UTC m=+0.200379904 container died 47ebeb71b74ea14007d566eb009abedb6056de46dde315eb4e68844863c55319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:50:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-def53a38a4b8c22b041dc28a31458ecff1b2a7437b1bc5996320710b6bf5be43-merged.mount: Deactivated successfully.
Oct 01 16:50:55 compute-0 podman[229099]: 2025-10-01 16:50:55.695141809 +0000 UTC m=+0.279542161 container remove 47ebeb71b74ea14007d566eb009abedb6056de46dde315eb4e68844863c55319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 16:50:55 compute-0 systemd[1]: libpod-conmon-47ebeb71b74ea14007d566eb009abedb6056de46dde315eb4e68844863c55319.scope: Deactivated successfully.
Oct 01 16:50:55 compute-0 podman[229200]: 2025-10-01 16:50:55.910672015 +0000 UTC m=+0.069216184 container create 026c0ed7efc4c9137d80a507c08fcaa2daded8e40d20732613cc3129a23bf55a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 16:50:55 compute-0 systemd[1]: Started libpod-conmon-026c0ed7efc4c9137d80a507c08fcaa2daded8e40d20732613cc3129a23bf55a.scope.
Oct 01 16:50:55 compute-0 podman[229200]: 2025-10-01 16:50:55.880877042 +0000 UTC m=+0.039421261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:50:56 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43bb31232cf3ebb63b6865c59059332e2c627bf5d7fcc668396ee8346e0f7345/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43bb31232cf3ebb63b6865c59059332e2c627bf5d7fcc668396ee8346e0f7345/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43bb31232cf3ebb63b6865c59059332e2c627bf5d7fcc668396ee8346e0f7345/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43bb31232cf3ebb63b6865c59059332e2c627bf5d7fcc668396ee8346e0f7345/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43bb31232cf3ebb63b6865c59059332e2c627bf5d7fcc668396ee8346e0f7345/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:50:56 compute-0 podman[229200]: 2025-10-01 16:50:56.025400293 +0000 UTC m=+0.183944532 container init 026c0ed7efc4c9137d80a507c08fcaa2daded8e40d20732613cc3129a23bf55a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:50:56 compute-0 podman[229200]: 2025-10-01 16:50:56.043482609 +0000 UTC m=+0.202026788 container start 026c0ed7efc4c9137d80a507c08fcaa2daded8e40d20732613cc3129a23bf55a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 01 16:50:56 compute-0 podman[229200]: 2025-10-01 16:50:56.048228094 +0000 UTC m=+0.206772333 container attach 026c0ed7efc4c9137d80a507c08fcaa2daded8e40d20732613cc3129a23bf55a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ganguly, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:50:56 compute-0 sudo[229287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bahzbjstajmltucwdzhvshkfbjonbnzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337455.5443435-338-199918206472652/AnsiballZ_container_config_data.py'
Oct 01 16:50:56 compute-0 sudo[229287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:56 compute-0 python3.9[229289]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 01 16:50:56 compute-0 sudo[229287]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:57 compute-0 ceph-mon[74273]: pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:57 compute-0 stupefied_ganguly[229256]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:50:57 compute-0 stupefied_ganguly[229256]: --> relative data size: 1.0
Oct 01 16:50:57 compute-0 stupefied_ganguly[229256]: --> All data devices are unavailable
Oct 01 16:50:57 compute-0 systemd[1]: libpod-026c0ed7efc4c9137d80a507c08fcaa2daded8e40d20732613cc3129a23bf55a.scope: Deactivated successfully.
Oct 01 16:50:57 compute-0 podman[229200]: 2025-10-01 16:50:57.14377118 +0000 UTC m=+1.302315359 container died 026c0ed7efc4c9137d80a507c08fcaa2daded8e40d20732613cc3129a23bf55a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:50:57 compute-0 systemd[1]: libpod-026c0ed7efc4c9137d80a507c08fcaa2daded8e40d20732613cc3129a23bf55a.scope: Consumed 1.038s CPU time.
Oct 01 16:50:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-43bb31232cf3ebb63b6865c59059332e2c627bf5d7fcc668396ee8346e0f7345-merged.mount: Deactivated successfully.
Oct 01 16:50:57 compute-0 sudo[229475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqoiobrpwqbrdfkwgynzvkrtelvxvcgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337456.6863303-347-50731846730962/AnsiballZ_container_config_hash.py'
Oct 01 16:50:57 compute-0 sudo[229475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:57 compute-0 podman[229200]: 2025-10-01 16:50:57.214266803 +0000 UTC m=+1.372810932 container remove 026c0ed7efc4c9137d80a507c08fcaa2daded8e40d20732613cc3129a23bf55a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ganguly, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 16:50:57 compute-0 systemd[1]: libpod-conmon-026c0ed7efc4c9137d80a507c08fcaa2daded8e40d20732613cc3129a23bf55a.scope: Deactivated successfully.
Oct 01 16:50:57 compute-0 sudo[229010]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:57 compute-0 sudo[229478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:50:57 compute-0 sudo[229478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:50:57 compute-0 sudo[229478]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:57 compute-0 sudo[229503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:50:57 compute-0 sudo[229503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:50:57 compute-0 sudo[229503]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:57 compute-0 python3.9[229477]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 01 16:50:57 compute-0 sudo[229528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:50:57 compute-0 sudo[229528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:50:57 compute-0 sudo[229475]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:57 compute-0 sudo[229528]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:57 compute-0 sudo[229553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:50:57 compute-0 sudo[229553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:50:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:57 compute-0 podman[229694]: 2025-10-01 16:50:57.93208441 +0000 UTC m=+0.064547957 container create 193d909113a5bbdc8efb96f36ff6262a1b3ba9b22cb140c6dc974b0aa4c5ed19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:50:57 compute-0 systemd[1]: Started libpod-conmon-193d909113a5bbdc8efb96f36ff6262a1b3ba9b22cb140c6dc974b0aa4c5ed19.scope.
Oct 01 16:50:57 compute-0 podman[229694]: 2025-10-01 16:50:57.905443879 +0000 UTC m=+0.037907496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:50:58 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:50:58 compute-0 podman[229694]: 2025-10-01 16:50:58.042665644 +0000 UTC m=+0.175129271 container init 193d909113a5bbdc8efb96f36ff6262a1b3ba9b22cb140c6dc974b0aa4c5ed19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:50:58 compute-0 podman[229694]: 2025-10-01 16:50:58.050410512 +0000 UTC m=+0.182874059 container start 193d909113a5bbdc8efb96f36ff6262a1b3ba9b22cb140c6dc974b0aa4c5ed19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:50:58 compute-0 podman[229694]: 2025-10-01 16:50:58.054635186 +0000 UTC m=+0.187098773 container attach 193d909113a5bbdc8efb96f36ff6262a1b3ba9b22cb140c6dc974b0aa4c5ed19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:50:58 compute-0 determined_ellis[229711]: 167 167
Oct 01 16:50:58 compute-0 podman[229694]: 2025-10-01 16:50:58.059258173 +0000 UTC m=+0.191721770 container died 193d909113a5bbdc8efb96f36ff6262a1b3ba9b22cb140c6dc974b0aa4c5ed19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 16:50:58 compute-0 systemd[1]: libpod-193d909113a5bbdc8efb96f36ff6262a1b3ba9b22cb140c6dc974b0aa4c5ed19.scope: Deactivated successfully.
Oct 01 16:50:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4a3f4d493dc064dd5f32e505d3375889bbc67d47ea593fc572d40a4c7b46d42-merged.mount: Deactivated successfully.
Oct 01 16:50:58 compute-0 podman[229694]: 2025-10-01 16:50:58.120252705 +0000 UTC m=+0.252716292 container remove 193d909113a5bbdc8efb96f36ff6262a1b3ba9b22cb140c6dc974b0aa4c5ed19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 16:50:58 compute-0 systemd[1]: libpod-conmon-193d909113a5bbdc8efb96f36ff6262a1b3ba9b22cb140c6dc974b0aa4c5ed19.scope: Deactivated successfully.
Oct 01 16:50:58 compute-0 sudo[229804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbjscnpydvhnfkuvzfqktfutniurumbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337457.7354057-356-42388234604366/AnsiballZ_podman_container_info.py'
Oct 01 16:50:58 compute-0 sudo[229804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:50:58 compute-0 podman[229811]: 2025-10-01 16:50:58.334293534 +0000 UTC m=+0.064294461 container create 3b71d4e47e6b2cd3803c8cbe308bba166977c7ab4e29e4b73d612ca82d302156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 16:50:58 compute-0 systemd[1]: Started libpod-conmon-3b71d4e47e6b2cd3803c8cbe308bba166977c7ab4e29e4b73d612ca82d302156.scope.
Oct 01 16:50:58 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df03d07e45d2ca845bd5e42dde8acea6baac3a382f909ffcd43fb8514fa05f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df03d07e45d2ca845bd5e42dde8acea6baac3a382f909ffcd43fb8514fa05f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df03d07e45d2ca845bd5e42dde8acea6baac3a382f909ffcd43fb8514fa05f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df03d07e45d2ca845bd5e42dde8acea6baac3a382f909ffcd43fb8514fa05f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:50:58 compute-0 podman[229811]: 2025-10-01 16:50:58.309376855 +0000 UTC m=+0.039377862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:50:58 compute-0 podman[229811]: 2025-10-01 16:50:58.414565633 +0000 UTC m=+0.144566580 container init 3b71d4e47e6b2cd3803c8cbe308bba166977c7ab4e29e4b73d612ca82d302156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 16:50:58 compute-0 podman[229811]: 2025-10-01 16:50:58.425452283 +0000 UTC m=+0.155453210 container start 3b71d4e47e6b2cd3803c8cbe308bba166977c7ab4e29e4b73d612ca82d302156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:50:58 compute-0 podman[229811]: 2025-10-01 16:50:58.428672172 +0000 UTC m=+0.158673109 container attach 3b71d4e47e6b2cd3803c8cbe308bba166977c7ab4e29e4b73d612ca82d302156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 01 16:50:58 compute-0 python3.9[229813]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 01 16:50:58 compute-0 sudo[229804]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:59 compute-0 ceph-mon[74273]: pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:59 compute-0 hopeful_keller[229829]: {
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:     "0": [
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:         {
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "devices": [
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "/dev/loop3"
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             ],
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "lv_name": "ceph_lv0",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "lv_size": "21470642176",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "name": "ceph_lv0",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "tags": {
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.cluster_name": "ceph",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.crush_device_class": "",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.encrypted": "0",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.osd_id": "0",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.type": "block",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.vdo": "0"
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             },
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "type": "block",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "vg_name": "ceph_vg0"
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:         }
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:     ],
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:     "1": [
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:         {
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "devices": [
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "/dev/loop4"
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             ],
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "lv_name": "ceph_lv1",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "lv_size": "21470642176",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "name": "ceph_lv1",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "tags": {
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.cluster_name": "ceph",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.crush_device_class": "",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.encrypted": "0",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.osd_id": "1",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.type": "block",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.vdo": "0"
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             },
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "type": "block",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "vg_name": "ceph_vg1"
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:         }
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:     ],
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:     "2": [
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:         {
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "devices": [
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "/dev/loop5"
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             ],
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "lv_name": "ceph_lv2",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "lv_size": "21470642176",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "name": "ceph_lv2",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "tags": {
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.cluster_name": "ceph",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.crush_device_class": "",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.encrypted": "0",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.osd_id": "2",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.type": "block",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:                 "ceph.vdo": "0"
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             },
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "type": "block",
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:             "vg_name": "ceph_vg2"
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:         }
Oct 01 16:50:59 compute-0 hopeful_keller[229829]:     ]
Oct 01 16:50:59 compute-0 hopeful_keller[229829]: }
Oct 01 16:50:59 compute-0 systemd[1]: libpod-3b71d4e47e6b2cd3803c8cbe308bba166977c7ab4e29e4b73d612ca82d302156.scope: Deactivated successfully.
Oct 01 16:50:59 compute-0 podman[229811]: 2025-10-01 16:50:59.206509736 +0000 UTC m=+0.936510663 container died 3b71d4e47e6b2cd3803c8cbe308bba166977c7ab4e29e4b73d612ca82d302156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 16:50:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-4df03d07e45d2ca845bd5e42dde8acea6baac3a382f909ffcd43fb8514fa05f6-merged.mount: Deactivated successfully.
Oct 01 16:50:59 compute-0 podman[229811]: 2025-10-01 16:50:59.284386693 +0000 UTC m=+1.014387620 container remove 3b71d4e47e6b2cd3803c8cbe308bba166977c7ab4e29e4b73d612ca82d302156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 16:50:59 compute-0 systemd[1]: libpod-conmon-3b71d4e47e6b2cd3803c8cbe308bba166977c7ab4e29e4b73d612ca82d302156.scope: Deactivated successfully.
Oct 01 16:50:59 compute-0 sudo[229553]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:59 compute-0 sudo[229907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:50:59 compute-0 sudo[229907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:50:59 compute-0 sudo[229907]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:59 compute-0 sudo[229968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:50:59 compute-0 sudo[229968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:50:59 compute-0 sudo[229968]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:59 compute-0 sudo[230005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:50:59 compute-0 sudo[230005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:50:59 compute-0 sudo[230005]: pam_unix(sudo:session): session closed for user root
Oct 01 16:50:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:50:59 compute-0 sudo[230030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:50:59 compute-0 sudo[230030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:51:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:51:00 compute-0 sudo[230154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxxyhgtxmiynfcbdamzmzdmayzebodgc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759337459.39252-369-11979480204974/AnsiballZ_edpm_container_manage.py'
Oct 01 16:51:00 compute-0 sudo[230154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:00 compute-0 podman[230171]: 2025-10-01 16:51:00.181595653 +0000 UTC m=+0.060173906 container create 2456c98137909a13d59e9aa072b369ab3b4023ca3346791256cfcb34877217ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 16:51:00 compute-0 systemd[1]: Started libpod-conmon-2456c98137909a13d59e9aa072b369ab3b4023ca3346791256cfcb34877217ae.scope.
Oct 01 16:51:00 compute-0 podman[230171]: 2025-10-01 16:51:00.15375493 +0000 UTC m=+0.032333243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:51:00 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:51:00 compute-0 podman[230171]: 2025-10-01 16:51:00.285545771 +0000 UTC m=+0.164124074 container init 2456c98137909a13d59e9aa072b369ab3b4023ca3346791256cfcb34877217ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 16:51:00 compute-0 podman[230171]: 2025-10-01 16:51:00.298220012 +0000 UTC m=+0.176798245 container start 2456c98137909a13d59e9aa072b369ab3b4023ca3346791256cfcb34877217ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 16:51:00 compute-0 podman[230171]: 2025-10-01 16:51:00.303944032 +0000 UTC m=+0.182522335 container attach 2456c98137909a13d59e9aa072b369ab3b4023ca3346791256cfcb34877217ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:51:00 compute-0 beautiful_cray[230188]: 167 167
Oct 01 16:51:00 compute-0 systemd[1]: libpod-2456c98137909a13d59e9aa072b369ab3b4023ca3346791256cfcb34877217ae.scope: Deactivated successfully.
Oct 01 16:51:00 compute-0 conmon[230188]: conmon 2456c98137909a13d59e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2456c98137909a13d59e9aa072b369ab3b4023ca3346791256cfcb34877217ae.scope/container/memory.events
Oct 01 16:51:00 compute-0 podman[230171]: 2025-10-01 16:51:00.309206229 +0000 UTC m=+0.187784492 container died 2456c98137909a13d59e9aa072b369ab3b4023ca3346791256cfcb34877217ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 16:51:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8ad8687d24897ce3c1bf7f0094a1f864841feeafb5ca64121b8530e8a005ba7-merged.mount: Deactivated successfully.
Oct 01 16:51:00 compute-0 python3[230158]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 01 16:51:00 compute-0 podman[230171]: 2025-10-01 16:51:00.362468223 +0000 UTC m=+0.241046466 container remove 2456c98137909a13d59e9aa072b369ab3b4023ca3346791256cfcb34877217ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 16:51:00 compute-0 systemd[1]: libpod-conmon-2456c98137909a13d59e9aa072b369ab3b4023ca3346791256cfcb34877217ae.scope: Deactivated successfully.
Oct 01 16:51:00 compute-0 podman[230236]: 2025-10-01 16:51:00.592973903 +0000 UTC m=+0.062120506 container create cfd5bb52636d855c06d44fbf0ff9c7b66bd6e6e317c54459565b3501c8d8ee23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banach, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 01 16:51:00 compute-0 podman[230258]: 2025-10-01 16:51:00.628717131 +0000 UTC m=+0.064831463 container create d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, tcib_managed=true, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 16:51:00 compute-0 podman[230258]: 2025-10-01 16:51:00.596395719 +0000 UTC m=+0.032510071 image pull 81d94872551c3ae3c30801602bbb5f0c44872f15dcde472a0ba869fe2f28966e quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 01 16:51:00 compute-0 python3[230158]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 01 16:51:00 compute-0 systemd[1]: Started libpod-conmon-cfd5bb52636d855c06d44fbf0ff9c7b66bd6e6e317c54459565b3501c8d8ee23.scope.
Oct 01 16:51:00 compute-0 podman[230236]: 2025-10-01 16:51:00.574313586 +0000 UTC m=+0.043460229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:51:00 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:51:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/302dd8dc10844983102a6a7e454ab36e3330336cbb2c05d44395492e09f41bdf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:51:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/302dd8dc10844983102a6a7e454ab36e3330336cbb2c05d44395492e09f41bdf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:51:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/302dd8dc10844983102a6a7e454ab36e3330336cbb2c05d44395492e09f41bdf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:51:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/302dd8dc10844983102a6a7e454ab36e3330336cbb2c05d44395492e09f41bdf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:51:00 compute-0 podman[230236]: 2025-10-01 16:51:00.705277779 +0000 UTC m=+0.174424432 container init cfd5bb52636d855c06d44fbf0ff9c7b66bd6e6e317c54459565b3501c8d8ee23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banach, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:51:00 compute-0 podman[230236]: 2025-10-01 16:51:00.720034227 +0000 UTC m=+0.189180840 container start cfd5bb52636d855c06d44fbf0ff9c7b66bd6e6e317c54459565b3501c8d8ee23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banach, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 16:51:00 compute-0 podman[230236]: 2025-10-01 16:51:00.724061314 +0000 UTC m=+0.193207947 container attach cfd5bb52636d855c06d44fbf0ff9c7b66bd6e6e317c54459565b3501c8d8ee23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banach, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:51:00 compute-0 sudo[230154]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:01 compute-0 ceph-mon[74273]: pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:01 compute-0 sudo[230455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sluhqlzwsyjxsyarfaanbopvquhvfzta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337461.031202-377-198099320407982/AnsiballZ_stat.py'
Oct 01 16:51:01 compute-0 sudo[230455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:01 compute-0 python3.9[230461]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:51:01 compute-0 loving_banach[230275]: {
Oct 01 16:51:01 compute-0 loving_banach[230275]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:51:01 compute-0 loving_banach[230275]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:51:01 compute-0 loving_banach[230275]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:51:01 compute-0 loving_banach[230275]:         "osd_id": 2,
Oct 01 16:51:01 compute-0 loving_banach[230275]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:51:01 compute-0 loving_banach[230275]:         "type": "bluestore"
Oct 01 16:51:01 compute-0 loving_banach[230275]:     },
Oct 01 16:51:01 compute-0 loving_banach[230275]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:51:01 compute-0 loving_banach[230275]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:51:01 compute-0 loving_banach[230275]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:51:01 compute-0 loving_banach[230275]:         "osd_id": 0,
Oct 01 16:51:01 compute-0 loving_banach[230275]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:51:01 compute-0 loving_banach[230275]:         "type": "bluestore"
Oct 01 16:51:01 compute-0 loving_banach[230275]:     },
Oct 01 16:51:01 compute-0 loving_banach[230275]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:51:01 compute-0 loving_banach[230275]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:51:01 compute-0 loving_banach[230275]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:51:01 compute-0 loving_banach[230275]:         "osd_id": 1,
Oct 01 16:51:01 compute-0 loving_banach[230275]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:51:01 compute-0 loving_banach[230275]:         "type": "bluestore"
Oct 01 16:51:01 compute-0 loving_banach[230275]:     }
Oct 01 16:51:01 compute-0 loving_banach[230275]: }
Oct 01 16:51:01 compute-0 sudo[230455]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:01 compute-0 systemd[1]: libpod-cfd5bb52636d855c06d44fbf0ff9c7b66bd6e6e317c54459565b3501c8d8ee23.scope: Deactivated successfully.
Oct 01 16:51:01 compute-0 systemd[1]: libpod-cfd5bb52636d855c06d44fbf0ff9c7b66bd6e6e317c54459565b3501c8d8ee23.scope: Consumed 1.035s CPU time.
Oct 01 16:51:01 compute-0 podman[230236]: 2025-10-01 16:51:01.748414048 +0000 UTC m=+1.217560641 container died cfd5bb52636d855c06d44fbf0ff9c7b66bd6e6e317c54459565b3501c8d8ee23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:51:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-302dd8dc10844983102a6a7e454ab36e3330336cbb2c05d44395492e09f41bdf-merged.mount: Deactivated successfully.
Oct 01 16:51:01 compute-0 podman[230236]: 2025-10-01 16:51:01.808554563 +0000 UTC m=+1.277701206 container remove cfd5bb52636d855c06d44fbf0ff9c7b66bd6e6e317c54459565b3501c8d8ee23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 16:51:01 compute-0 systemd[1]: libpod-conmon-cfd5bb52636d855c06d44fbf0ff9c7b66bd6e6e317c54459565b3501c8d8ee23.scope: Deactivated successfully.
Oct 01 16:51:01 compute-0 sudo[230030]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:51:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:51:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:51:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:51:01 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev b8fa7d3f-14ae-4ad2-98ad-80109b53ccda does not exist
Oct 01 16:51:01 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 2a6d0dc4-a869-43d7-8bc2-12e13b82f862 does not exist
Oct 01 16:51:01 compute-0 sudo[230525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:51:01 compute-0 sudo[230525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:51:01 compute-0 sudo[230525]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:02 compute-0 sudo[230562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:51:02 compute-0 sudo[230562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:51:02 compute-0 sudo[230562]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:02 compute-0 sudo[230700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgcmhnejkjvaorklphrbbzakfhvcttgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337462.004147-386-119111784050158/AnsiballZ_file.py'
Oct 01 16:51:02 compute-0 sudo[230700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:02 compute-0 python3.9[230702]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:02 compute-0 sudo[230700]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:02 compute-0 ceph-mon[74273]: pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:51:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:51:02 compute-0 sudo[230776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulinttmijrigdeetltwkzsyrrnpmejqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337462.004147-386-119111784050158/AnsiballZ_stat.py'
Oct 01 16:51:02 compute-0 sudo[230776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:03 compute-0 python3.9[230778]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:51:03 compute-0 sudo[230776]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:03 compute-0 sudo[230927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juevhhpyilvsemilndkebuitgphchzxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337463.239965-386-149109123525760/AnsiballZ_copy.py'
Oct 01 16:51:03 compute-0 sudo[230927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:04 compute-0 python3.9[230929]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759337463.239965-386-149109123525760/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:04 compute-0 sudo[230927]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:04 compute-0 sudo[231003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wycheltiidpoqkfswxvlawysllrtvbna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337463.239965-386-149109123525760/AnsiballZ_systemd.py'
Oct 01 16:51:04 compute-0 sudo[231003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:04 compute-0 python3.9[231005]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 01 16:51:04 compute-0 systemd[1]: Reloading.
Oct 01 16:51:04 compute-0 systemd-rc-local-generator[231029]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:51:04 compute-0 systemd-sysv-generator[231034]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:51:04 compute-0 ceph-mon[74273]: pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:51:05 compute-0 sudo[231003]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:05 compute-0 sudo[231113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqhtufqclrckvsavwvgwamoyvolraxgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337463.239965-386-149109123525760/AnsiballZ_systemd.py'
Oct 01 16:51:05 compute-0 sudo[231113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:05 compute-0 python3.9[231115]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:51:05 compute-0 systemd[1]: Reloading.
Oct 01 16:51:05 compute-0 systemd-rc-local-generator[231144]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:51:06 compute-0 systemd-sysv-generator[231149]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:51:06 compute-0 systemd[1]: Starting iscsid container...
Oct 01 16:51:06 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa4f7d28db492ab1a316de755510d0c01493033cd9a3ce96ed669dc754c7315f/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 01 16:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa4f7d28db492ab1a316de755510d0c01493033cd9a3ce96ed669dc754c7315f/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 01 16:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa4f7d28db492ab1a316de755510d0c01493033cd9a3ce96ed669dc754c7315f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 01 16:51:06 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c.
Oct 01 16:51:06 compute-0 podman[231156]: 2025-10-01 16:51:06.391554796 +0000 UTC m=+0.137459420 container init d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:51:06 compute-0 iscsid[231172]: + sudo -E kolla_set_configs
Oct 01 16:51:06 compute-0 podman[231156]: 2025-10-01 16:51:06.430847614 +0000 UTC m=+0.176752168 container start d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 16:51:06 compute-0 podman[231156]: iscsid
Oct 01 16:51:06 compute-0 systemd[1]: Started iscsid container.
Oct 01 16:51:06 compute-0 sudo[231178]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 01 16:51:06 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 01 16:51:06 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 01 16:51:06 compute-0 sudo[231113]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:06 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 01 16:51:06 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 01 16:51:06 compute-0 systemd[231201]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 01 16:51:06 compute-0 podman[231179]: 2025-10-01 16:51:06.546442665 +0000 UTC m=+0.104030266 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 01 16:51:06 compute-0 systemd[1]: d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c-3fce1f53321510e5.service: Main process exited, code=exited, status=1/FAILURE
Oct 01 16:51:06 compute-0 systemd[1]: d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c-3fce1f53321510e5.service: Failed with result 'exit-code'.
Oct 01 16:51:06 compute-0 systemd[231201]: Queued start job for default target Main User Target.
Oct 01 16:51:06 compute-0 systemd[231201]: Created slice User Application Slice.
Oct 01 16:51:06 compute-0 systemd[231201]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 01 16:51:06 compute-0 systemd[231201]: Started Daily Cleanup of User's Temporary Directories.
Oct 01 16:51:06 compute-0 systemd[231201]: Reached target Paths.
Oct 01 16:51:06 compute-0 systemd[231201]: Reached target Timers.
Oct 01 16:51:06 compute-0 systemd[231201]: Starting D-Bus User Message Bus Socket...
Oct 01 16:51:06 compute-0 systemd[231201]: Starting Create User's Volatile Files and Directories...
Oct 01 16:51:06 compute-0 systemd[231201]: Finished Create User's Volatile Files and Directories.
Oct 01 16:51:06 compute-0 systemd[231201]: Listening on D-Bus User Message Bus Socket.
Oct 01 16:51:06 compute-0 systemd[231201]: Reached target Sockets.
Oct 01 16:51:06 compute-0 systemd[231201]: Reached target Basic System.
Oct 01 16:51:06 compute-0 systemd[231201]: Reached target Main User Target.
Oct 01 16:51:06 compute-0 systemd[231201]: Startup finished in 154ms.
Oct 01 16:51:06 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 01 16:51:06 compute-0 systemd[1]: Started Session c3 of User root.
Oct 01 16:51:06 compute-0 sudo[231178]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 01 16:51:06 compute-0 iscsid[231172]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 01 16:51:06 compute-0 iscsid[231172]: INFO:__main__:Validating config file
Oct 01 16:51:06 compute-0 iscsid[231172]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 01 16:51:06 compute-0 iscsid[231172]: INFO:__main__:Writing out command to execute
Oct 01 16:51:06 compute-0 sudo[231178]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:06 compute-0 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 01 16:51:06 compute-0 iscsid[231172]: ++ cat /run_command
Oct 01 16:51:06 compute-0 iscsid[231172]: + CMD='/usr/sbin/iscsid -f'
Oct 01 16:51:06 compute-0 iscsid[231172]: + ARGS=
Oct 01 16:51:06 compute-0 iscsid[231172]: + sudo kolla_copy_cacerts
Oct 01 16:51:06 compute-0 sudo[231291]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 01 16:51:06 compute-0 systemd[1]: Started Session c4 of User root.
Oct 01 16:51:06 compute-0 sudo[231291]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 01 16:51:06 compute-0 sudo[231291]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:06 compute-0 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 01 16:51:06 compute-0 iscsid[231172]: + [[ ! -n '' ]]
Oct 01 16:51:06 compute-0 iscsid[231172]: + . kolla_extend_start
Oct 01 16:51:06 compute-0 iscsid[231172]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 01 16:51:06 compute-0 iscsid[231172]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 01 16:51:06 compute-0 iscsid[231172]: Running command: '/usr/sbin/iscsid -f'
Oct 01 16:51:06 compute-0 iscsid[231172]: + umask 0022
Oct 01 16:51:06 compute-0 iscsid[231172]: + exec /usr/sbin/iscsid -f
Oct 01 16:51:06 compute-0 ceph-mon[74273]: pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:06 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Oct 01 16:51:07 compute-0 python3.9[231377]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:51:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:07 compute-0 sudo[231527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sopjdikjmxzlbcycquidovdpshigbenz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337467.5108278-423-146032809692462/AnsiballZ_file.py'
Oct 01 16:51:07 compute-0 sudo[231527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:08 compute-0 python3.9[231529]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:08 compute-0 sudo[231527]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:08 compute-0 sudo[231679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abeomuwxglvqwohtbwydncigvzioazli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337468.4854085-434-80208526693017/AnsiballZ_service_facts.py'
Oct 01 16:51:08 compute-0 sudo[231679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:08 compute-0 ceph-mon[74273]: pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:09 compute-0 python3.9[231681]: ansible-ansible.builtin.service_facts Invoked
Oct 01 16:51:09 compute-0 network[231698]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 01 16:51:09 compute-0 network[231699]: 'network-scripts' will be removed from distribution in near future.
Oct 01 16:51:09 compute-0 network[231700]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 01 16:51:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:51:10 compute-0 ceph-mon[74273]: pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:51:11
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.control', 'images', 'default.rgw.log', '.rgw.root', 'backups', 'cephfs.cephfs.meta']
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:51:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:12 compute-0 ceph-mon[74273]: pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:13 compute-0 sudo[231679]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:14 compute-0 sudo[231973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhoqdjjautcbhzaauolsxdxvopaaanfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337473.7459564-444-222075191471901/AnsiballZ_file.py'
Oct 01 16:51:14 compute-0 sudo[231973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:14 compute-0 python3.9[231975]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 01 16:51:14 compute-0 sudo[231973]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:14 compute-0 ceph-mon[74273]: pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:15 compute-0 sudo[232125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyeottozxhrtilyevzleeqzmaednkchl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337474.4928808-452-10739602706208/AnsiballZ_modprobe.py'
Oct 01 16:51:15 compute-0 sudo[232125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:51:15 compute-0 python3.9[232127]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 01 16:51:15 compute-0 sudo[232125]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:15 compute-0 sudo[232281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fotychojltyrvvhpicndnsguxonbwguc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337475.4498389-460-279715428279638/AnsiballZ_stat.py'
Oct 01 16:51:15 compute-0 sudo[232281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:16 compute-0 python3.9[232283]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:51:16 compute-0 sudo[232281]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:16 compute-0 sudo[232404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcewdowfxfvyhhvhmnqvhptmzjdjcfdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337475.4498389-460-279715428279638/AnsiballZ_copy.py'
Oct 01 16:51:16 compute-0 sudo[232404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:16 compute-0 python3.9[232406]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759337475.4498389-460-279715428279638/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:16 compute-0 sudo[232404]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:16 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 01 16:51:16 compute-0 systemd[231201]: Activating special unit Exit the Session...
Oct 01 16:51:16 compute-0 systemd[231201]: Stopped target Main User Target.
Oct 01 16:51:16 compute-0 systemd[231201]: Stopped target Basic System.
Oct 01 16:51:16 compute-0 systemd[231201]: Stopped target Paths.
Oct 01 16:51:16 compute-0 systemd[231201]: Stopped target Sockets.
Oct 01 16:51:16 compute-0 systemd[231201]: Stopped target Timers.
Oct 01 16:51:16 compute-0 systemd[231201]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 01 16:51:16 compute-0 systemd[231201]: Closed D-Bus User Message Bus Socket.
Oct 01 16:51:16 compute-0 systemd[231201]: Stopped Create User's Volatile Files and Directories.
Oct 01 16:51:16 compute-0 systemd[231201]: Removed slice User Application Slice.
Oct 01 16:51:16 compute-0 systemd[231201]: Reached target Shutdown.
Oct 01 16:51:16 compute-0 systemd[231201]: Finished Exit the Session.
Oct 01 16:51:16 compute-0 systemd[231201]: Reached target Exit the Session.
Oct 01 16:51:16 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 01 16:51:16 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 01 16:51:16 compute-0 ceph-mon[74273]: pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:16 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 01 16:51:16 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 01 16:51:16 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 01 16:51:16 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 01 16:51:16 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 01 16:51:17 compute-0 sudo[232559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwmnrabavyqxzhgmeerxwcehokkzjnht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337476.9325447-476-175892006281536/AnsiballZ_lineinfile.py'
Oct 01 16:51:17 compute-0 sudo[232559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:17 compute-0 python3.9[232561]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:17 compute-0 sudo[232559]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:18 compute-0 sudo[232724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaicpjywdnwokfqeaxplmbjdkqqchwux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337477.7095516-484-252214862628059/AnsiballZ_systemd.py'
Oct 01 16:51:18 compute-0 sudo[232724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:18 compute-0 podman[232685]: 2025-10-01 16:51:18.208111824 +0000 UTC m=+0.149639447 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 01 16:51:18 compute-0 python3.9[232732]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 16:51:18 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 01 16:51:18 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 01 16:51:18 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 01 16:51:18 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 01 16:51:18 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 01 16:51:18 compute-0 sudo[232724]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:18 compute-0 ceph-mon[74273]: pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:19 compute-0 sudo[232894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhfubclrmtfraokhsilpiqvkheirlzyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337478.7857878-492-245451150092967/AnsiballZ_file.py'
Oct 01 16:51:19 compute-0 sudo[232894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:19 compute-0 python3.9[232896]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:51:19 compute-0 sudo[232894]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:51:19.951 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:51:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:51:19.951 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:51:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:51:19.952 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:51:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:51:20 compute-0 sudo[233046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkopecwqgsqovfiisfracesfbzcdwwhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337479.705371-501-131978270862617/AnsiballZ_stat.py'
Oct 01 16:51:20 compute-0 sudo[233046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:20 compute-0 python3.9[233048]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:51:20 compute-0 sudo[233046]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:20 compute-0 ceph-mon[74273]: pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:20 compute-0 sudo[233198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjmuznsqchflqvbjpkmodvdspsnjygrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337480.5632124-510-89230948782637/AnsiballZ_stat.py'
Oct 01 16:51:20 compute-0 sudo[233198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:51:21 compute-0 python3.9[233200]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:51:21 compute-0 sudo[233198]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:21 compute-0 sudo[233350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtsumjtbxkzvyvqebaambqtwrpumsajv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337481.4389927-518-1159892011329/AnsiballZ_stat.py'
Oct 01 16:51:21 compute-0 sudo[233350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:22 compute-0 python3.9[233352]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:51:22 compute-0 sudo[233350]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:22 compute-0 sudo[233473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xutpvlleebcxvnvnnkvkksbxutaedilg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337481.4389927-518-1159892011329/AnsiballZ_copy.py'
Oct 01 16:51:22 compute-0 sudo[233473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:22 compute-0 python3.9[233475]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759337481.4389927-518-1159892011329/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:22 compute-0 sudo[233473]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:22 compute-0 ceph-mon[74273]: pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:23 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 01 16:51:23 compute-0 sudo[233635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfrajochjsqktajdbxlsokvqpomkmlts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337482.9805152-533-26317258623859/AnsiballZ_command.py'
Oct 01 16:51:23 compute-0 sudo[233635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:23 compute-0 podman[233599]: 2025-10-01 16:51:23.62509607 +0000 UTC m=+0.088715438 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:51:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:23 compute-0 python3.9[233642]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:51:23 compute-0 sudo[233635]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:24 compute-0 sudo[233797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esfacaozuxqqkesbpjyplyvazolqncdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337484.0333319-541-15798378745422/AnsiballZ_lineinfile.py'
Oct 01 16:51:24 compute-0 sudo[233797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:24 compute-0 python3.9[233799]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:24 compute-0 sudo[233797]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:24 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 01 16:51:24 compute-0 ceph-mon[74273]: pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:51:25 compute-0 sudo[233950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pakvfbsexhgaxasqkloxczlnohoqdsud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337484.774396-549-258461981049252/AnsiballZ_replace.py'
Oct 01 16:51:25 compute-0 sudo[233950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:25 compute-0 python3.9[233952]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:25 compute-0 sudo[233950]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:26 compute-0 sudo[234102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcijjpnuaehyqgukohxrmefsjpaeambm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337485.7958326-557-20900593297993/AnsiballZ_replace.py'
Oct 01 16:51:26 compute-0 sudo[234102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:26 compute-0 python3.9[234104]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:26 compute-0 sudo[234102]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:26 compute-0 sudo[234254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iphkfexhrecgzxrrjwqvqystqotpygbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337486.6145437-566-245815454399284/AnsiballZ_lineinfile.py'
Oct 01 16:51:26 compute-0 sudo[234254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:26 compute-0 ceph-mon[74273]: pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:27 compute-0 python3.9[234256]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:27 compute-0 sudo[234254]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:27 compute-0 sudo[234406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kesninzhblfwcscimhcccstcxncqpnqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337487.2876236-566-207507637790630/AnsiballZ_lineinfile.py'
Oct 01 16:51:27 compute-0 sudo[234406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:27 compute-0 python3.9[234408]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:27 compute-0 sudo[234406]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:28 compute-0 sudo[234558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsafgkzewcwbxyxytbxrmstdisgaixca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337488.0930378-566-20858443716593/AnsiballZ_lineinfile.py'
Oct 01 16:51:28 compute-0 sudo[234558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:28 compute-0 python3.9[234560]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:28 compute-0 sudo[234558]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:28 compute-0 ceph-mon[74273]: pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:29 compute-0 sudo[234710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhaefbpxconvfwiirpmxsapdnpwdbebo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337488.82171-566-113742036900411/AnsiballZ_lineinfile.py'
Oct 01 16:51:29 compute-0 sudo[234710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:29 compute-0 python3.9[234712]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:29 compute-0 sudo[234710]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:29 compute-0 sudo[234862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkujlappykcxapsfbolssjstqnlpmcxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337489.5813186-595-56647345461431/AnsiballZ_stat.py'
Oct 01 16:51:29 compute-0 sudo[234862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:51:30.045222) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337490045263, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1822, "num_deletes": 251, "total_data_size": 3086026, "memory_usage": 3120216, "flush_reason": "Manual Compaction"}
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337490058626, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1735841, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11766, "largest_seqno": 13587, "table_properties": {"data_size": 1729929, "index_size": 2987, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14782, "raw_average_key_size": 20, "raw_value_size": 1716883, "raw_average_value_size": 2335, "num_data_blocks": 138, "num_entries": 735, "num_filter_entries": 735, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759337281, "oldest_key_time": 1759337281, "file_creation_time": 1759337490, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 13462 microseconds, and 8226 cpu microseconds.
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:51:30.058683) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1735841 bytes OK
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:51:30.058706) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:51:30.061025) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:51:30.061046) EVENT_LOG_v1 {"time_micros": 1759337490061039, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:51:30.061069) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3078325, prev total WAL file size 3078325, number of live WAL files 2.
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:51:30.062555) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1695KB)], [29(7740KB)]
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337490062647, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9662294, "oldest_snapshot_seqno": -1}
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4026 keys, 7650869 bytes, temperature: kUnknown
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337490117994, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7650869, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7622055, "index_size": 17632, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 95714, "raw_average_key_size": 23, "raw_value_size": 7547608, "raw_average_value_size": 1874, "num_data_blocks": 767, "num_entries": 4026, "num_filter_entries": 4026, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336399, "oldest_key_time": 0, "file_creation_time": 1759337490, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:51:30.118298) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7650869 bytes
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:51:30.120249) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.3 rd, 138.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.6 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.0) write-amplify(4.4) OK, records in: 4441, records dropped: 415 output_compression: NoCompression
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:51:30.120279) EVENT_LOG_v1 {"time_micros": 1759337490120264, "job": 12, "event": "compaction_finished", "compaction_time_micros": 55436, "compaction_time_cpu_micros": 32544, "output_level": 6, "num_output_files": 1, "total_output_size": 7650869, "num_input_records": 4441, "num_output_records": 4026, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337490120984, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337490123488, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:51:30.062423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:51:30.123601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:51:30.123610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:51:30.123612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:51:30.123615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:51:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:51:30.123618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:51:30 compute-0 python3.9[234864]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:51:30 compute-0 sudo[234862]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:30 compute-0 sudo[235016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnabanvekuqyfgqxnlymuuclpvflmrsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337490.4406564-603-64402367516750/AnsiballZ_file.py'
Oct 01 16:51:30 compute-0 sudo[235016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:31 compute-0 ceph-mon[74273]: pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:31 compute-0 python3.9[235018]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:31 compute-0 sudo[235016]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:31 compute-0 sudo[235168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbleujkensitvyewzrkydsoxawtumjtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337491.4563737-612-232961603377665/AnsiballZ_file.py'
Oct 01 16:51:31 compute-0 sudo[235168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:32 compute-0 python3.9[235170]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:51:32 compute-0 sudo[235168]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:32 compute-0 sudo[235320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxuyybcqlcezcfzkijjqggbnqblayklz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337492.2659848-620-236428654474747/AnsiballZ_stat.py'
Oct 01 16:51:32 compute-0 sudo[235320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:32 compute-0 python3.9[235322]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:51:32 compute-0 sudo[235320]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:33 compute-0 ceph-mon[74273]: pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:33 compute-0 sudo[235398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfweraesqjosoyzzglymvxyblnxzmuob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337492.2659848-620-236428654474747/AnsiballZ_file.py'
Oct 01 16:51:33 compute-0 sudo[235398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:33 compute-0 python3.9[235400]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:51:33 compute-0 sudo[235398]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:33 compute-0 sudo[235550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxaewtkyqglwxqmvrpptlmegbxtbozji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337493.5440607-620-114817362617759/AnsiballZ_stat.py'
Oct 01 16:51:33 compute-0 sudo[235550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:34 compute-0 python3.9[235552]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:51:34 compute-0 sudo[235550]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:34 compute-0 sudo[235628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfdplkfrutpytmatvvwjbnnkxtkkatlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337493.5440607-620-114817362617759/AnsiballZ_file.py'
Oct 01 16:51:34 compute-0 sudo[235628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:34 compute-0 python3.9[235630]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:51:34 compute-0 sudo[235628]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:51:35 compute-0 ceph-mon[74273]: pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:35 compute-0 sudo[235780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqotdtraiwmnhvdmzwprlrjezoijzepn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337494.8661873-643-250910845006581/AnsiballZ_file.py'
Oct 01 16:51:35 compute-0 sudo[235780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:35 compute-0 python3.9[235782]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:35 compute-0 sudo[235780]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:36 compute-0 ceph-mon[74273]: pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:36 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 01 16:51:36 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 01 16:51:36 compute-0 sudo[235934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aigzqtddexnntgukqaeznutmlzqklzck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337495.8464057-651-248021176608057/AnsiballZ_stat.py'
Oct 01 16:51:36 compute-0 sudo[235934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:36 compute-0 python3.9[235936]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:51:36 compute-0 sudo[235934]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:36 compute-0 sudo[236025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecorvxaooshmnfmotbkwlsglsphdmbqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337495.8464057-651-248021176608057/AnsiballZ_file.py'
Oct 01 16:51:36 compute-0 sudo[236025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:36 compute-0 podman[235985]: 2025-10-01 16:51:36.806796184 +0000 UTC m=+0.115096135 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible)
Oct 01 16:51:36 compute-0 python3.9[236035]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:36 compute-0 sudo[236025]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:37 compute-0 sudo[236185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvagaalkxrhtuobdfsxlrdtgewqtjdzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337497.2041943-663-269397229601387/AnsiballZ_stat.py'
Oct 01 16:51:37 compute-0 sudo[236185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:37 compute-0 python3.9[236187]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:51:37 compute-0 sudo[236185]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:38 compute-0 sudo[236263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdmqfwylvlmmnkogvxkswihhtpvbiolp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337497.2041943-663-269397229601387/AnsiballZ_file.py'
Oct 01 16:51:38 compute-0 sudo[236263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:38 compute-0 python3.9[236265]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:38 compute-0 sudo[236263]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:38 compute-0 ceph-mon[74273]: pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:39 compute-0 sudo[236415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkqulbdbydgbcxhnryymcagkmaijwkbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337498.5869067-675-275497343914639/AnsiballZ_systemd.py'
Oct 01 16:51:39 compute-0 sudo[236415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:39 compute-0 python3.9[236417]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:51:39 compute-0 systemd[1]: Reloading.
Oct 01 16:51:39 compute-0 systemd-rc-local-generator[236441]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:51:39 compute-0 systemd-sysv-generator[236447]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:51:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:39 compute-0 sudo[236415]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:51:40 compute-0 sudo[236603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulyanqclvapttjitodwhtyyhsjrwywey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337500.0569391-683-113682393722715/AnsiballZ_stat.py'
Oct 01 16:51:40 compute-0 sudo[236603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:40 compute-0 python3.9[236605]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:51:40 compute-0 sudo[236603]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:40 compute-0 ceph-mon[74273]: pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:41 compute-0 sudo[236681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbcwmoxgwghliysgmvdrfvggdplhxljl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337500.0569391-683-113682393722715/AnsiballZ_file.py'
Oct 01 16:51:41 compute-0 sudo[236681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:41 compute-0 python3.9[236683]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:41 compute-0 sudo[236681]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:51:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:51:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:51:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:51:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:51:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:51:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:41 compute-0 sudo[236833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngybjgjrbrsgegmfjxblztoeczwpdgzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337501.533173-695-163638512193308/AnsiballZ_stat.py'
Oct 01 16:51:41 compute-0 sudo[236833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:42 compute-0 python3.9[236835]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:51:42 compute-0 sudo[236833]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:42 compute-0 sudo[236911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvilmhwiqugvxufvsajyashkeonnbkqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337501.533173-695-163638512193308/AnsiballZ_file.py'
Oct 01 16:51:42 compute-0 sudo[236911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:42 compute-0 python3.9[236913]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:42 compute-0 sudo[236911]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:42 compute-0 ceph-mon[74273]: pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:43 compute-0 sudo[237063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afkfttvfrbpbsjladxtfqvdxwocmymeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337502.9746723-707-103265313724194/AnsiballZ_systemd.py'
Oct 01 16:51:43 compute-0 sudo[237063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:43 compute-0 python3.9[237065]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:51:43 compute-0 systemd[1]: Reloading.
Oct 01 16:51:43 compute-0 systemd-rc-local-generator[237094]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:51:43 compute-0 systemd-sysv-generator[237097]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:51:44 compute-0 systemd[1]: Starting Create netns directory...
Oct 01 16:51:44 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 01 16:51:44 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 01 16:51:44 compute-0 systemd[1]: Finished Create netns directory.
Oct 01 16:51:44 compute-0 sudo[237063]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:44 compute-0 ceph-mon[74273]: pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:45 compute-0 sudo[237256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgedcczfytczngfvqizhgztpoleyabpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337504.5933084-717-50337642643723/AnsiballZ_file.py'
Oct 01 16:51:45 compute-0 sudo[237256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:51:45 compute-0 python3.9[237258]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:51:45 compute-0 sudo[237256]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:46 compute-0 sudo[237408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiurvyakxsivjilafkewtvrwspybbfch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337505.5924027-725-35922177492822/AnsiballZ_stat.py'
Oct 01 16:51:46 compute-0 sudo[237408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:46 compute-0 python3.9[237410]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:51:46 compute-0 sudo[237408]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:46 compute-0 ceph-mon[74273]: pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:46 compute-0 sudo[237531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjzudilbamleybpwpqsopcmxvkigockv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337505.5924027-725-35922177492822/AnsiballZ_copy.py'
Oct 01 16:51:46 compute-0 sudo[237531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:46 compute-0 python3.9[237533]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759337505.5924027-725-35922177492822/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:51:47 compute-0 sudo[237531]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:47 compute-0 sudo[237683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hujoousbavllcbosjzwghbwaalkukwpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337507.4927897-742-232802507611512/AnsiballZ_file.py'
Oct 01 16:51:47 compute-0 sudo[237683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:48 compute-0 python3.9[237685]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:51:48 compute-0 sudo[237683]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:48 compute-0 sudo[237846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbmhrlrkgspapmupnnmfsozxdcdijpnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337508.3973255-750-3951192229746/AnsiballZ_stat.py'
Oct 01 16:51:48 compute-0 sudo[237846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:48 compute-0 ceph-mon[74273]: pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:48 compute-0 podman[237809]: 2025-10-01 16:51:48.825968907 +0000 UTC m=+0.129766995 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 01 16:51:48 compute-0 python3.9[237858]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:51:48 compute-0 sudo[237846]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:49 compute-0 sudo[237985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mudsrxgtptsmszzdywlztszwnpoxuvds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337508.3973255-750-3951192229746/AnsiballZ_copy.py'
Oct 01 16:51:49 compute-0 sudo[237985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:49 compute-0 python3.9[237987]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759337508.3973255-750-3951192229746/.source.json _original_basename=.75c3oqjx follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:49 compute-0 sudo[237985]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:51:50 compute-0 sudo[238137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opupvvbznhxgercbbpslalwidcofsfjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337509.8751664-765-37995006930161/AnsiballZ_file.py'
Oct 01 16:51:50 compute-0 sudo[238137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:50 compute-0 python3.9[238139]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:51:50 compute-0 sudo[238137]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:50 compute-0 ceph-mon[74273]: pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:51 compute-0 sudo[238289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugyydmztvfesyvjhglunldlmhsgfrgnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337510.7006679-773-174845047539380/AnsiballZ_stat.py'
Oct 01 16:51:51 compute-0 sudo[238289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:51 compute-0 sudo[238289]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:51 compute-0 sudo[238412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuqdosvntiigabupfnuoioqggtcarikf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337510.7006679-773-174845047539380/AnsiballZ_copy.py'
Oct 01 16:51:51 compute-0 sudo[238412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:52 compute-0 sudo[238412]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:52 compute-0 ceph-mon[74273]: pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:52 compute-0 sudo[238564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vohfxffzkrhwodfumpzxnfgqccrcpovm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337512.4919517-790-4495148226705/AnsiballZ_container_config_data.py'
Oct 01 16:51:52 compute-0 sudo[238564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:53 compute-0 python3.9[238566]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 01 16:51:53 compute-0 sudo[238564]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:53 compute-0 sudo[238717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtzwgffpadaelwrjrcoetrbrclyxccym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337513.35163-799-214771940134615/AnsiballZ_container_config_hash.py'
Oct 01 16:51:53 compute-0 sudo[238717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:53 compute-0 podman[238706]: 2025-10-01 16:51:53.797702192 +0000 UTC m=+0.100569757 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 01 16:51:53 compute-0 python3.9[238724]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 01 16:51:53 compute-0 sudo[238717]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:54 compute-0 sudo[238887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhoypemyxmyaskfjbjzurppaipqwrogk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337514.3082106-808-34415718195151/AnsiballZ_podman_container_info.py'
Oct 01 16:51:54 compute-0 sudo[238887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:54 compute-0 python3.9[238889]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 01 16:51:54 compute-0 ceph-mon[74273]: pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:51:55 compute-0 sudo[238887]: pam_unix(sudo:session): session closed for user root
Oct 01 16:51:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:56 compute-0 sudo[239065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjkspwbjcmiphaggycbdsdjabkiizzgg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759337516.1147506-821-31955714740526/AnsiballZ_edpm_container_manage.py'
Oct 01 16:51:56 compute-0 sudo[239065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:51:56 compute-0 python3[239067]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 01 16:51:57 compute-0 ceph-mon[74273]: pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:58 compute-0 ceph-mon[74273]: pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:59 compute-0 podman[239081]: 2025-10-01 16:51:59.160612353 +0000 UTC m=+2.355646842 image pull 4ee39d2b05f9d7d8e7f025baefe799c33619f4419f4eb27d17ca383a40343475 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 01 16:51:59 compute-0 podman[239138]: 2025-10-01 16:51:59.346519387 +0000 UTC m=+0.034432176 image pull 4ee39d2b05f9d7d8e7f025baefe799c33619f4419f4eb27d17ca383a40343475 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 01 16:51:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:51:59 compute-0 podman[239138]: 2025-10-01 16:51:59.735807359 +0000 UTC m=+0.423720148 container create 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 01 16:51:59 compute-0 python3[239067]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 01 16:51:59 compute-0 sudo[239065]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:52:00 compute-0 sudo[239326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbwakavmpjfouuobobtokdngkjutbebr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337520.171885-829-133482535326240/AnsiballZ_stat.py'
Oct 01 16:52:00 compute-0 sudo[239326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:00 compute-0 python3.9[239328]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:52:00 compute-0 sudo[239326]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:00 compute-0 ceph-mon[74273]: pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:01 compute-0 sudo[239480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igxmskcwsuvnotplgzpukctpzcjbjzrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337521.168378-838-128421289591405/AnsiballZ_file.py'
Oct 01 16:52:01 compute-0 sudo[239480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:01 compute-0 python3.9[239482]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:01 compute-0 sudo[239480]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:02 compute-0 sudo[239560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dctvdhwahloydbzgogxvwvkviwdnqntd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337521.168378-838-128421289591405/AnsiballZ_stat.py'
Oct 01 16:52:02 compute-0 sudo[239560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:02 compute-0 sudo[239556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:52:02 compute-0 sudo[239556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:52:02 compute-0 sudo[239556]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:02 compute-0 sudo[239584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:52:02 compute-0 sudo[239584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:52:02 compute-0 sudo[239584]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:02 compute-0 sudo[239609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:52:02 compute-0 sudo[239609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:52:02 compute-0 sudo[239609]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:02 compute-0 python3.9[239576]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:52:02 compute-0 sudo[239560]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:02 compute-0 sudo[239634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:52:02 compute-0 sudo[239634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:52:02 compute-0 ceph-mon[74273]: pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:03 compute-0 sudo[239634]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:03 compute-0 sudo[239838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnqhuotgsramuehrwebeggvyfyqvuduj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337522.4508655-838-57587118361698/AnsiballZ_copy.py'
Oct 01 16:52:03 compute-0 sudo[239838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:52:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:52:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:52:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:52:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:52:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:52:03 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 1ef22740-1469-4402-a82d-a6784e8f1430 does not exist
Oct 01 16:52:03 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev c22a8c38-251e-4bea-a264-f35ed5861912 does not exist
Oct 01 16:52:03 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev e9a123a4-9479-43e3-aac7-d9deb6176421 does not exist
Oct 01 16:52:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:52:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:52:03 compute-0 python3.9[239840]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759337522.4508655-838-57587118361698/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:52:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:52:03 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:52:03 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:52:03 compute-0 sudo[239838]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:03 compute-0 sudo[239841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:52:03 compute-0 sudo[239841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:52:03 compute-0 sudo[239841]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:03 compute-0 sudo[239887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:52:03 compute-0 sudo[239887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:52:03 compute-0 sudo[239887]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:03 compute-0 sudo[239918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:52:03 compute-0 sudo[239918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:52:03 compute-0 sudo[239918]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:52:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:52:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:52:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:52:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:52:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:52:03 compute-0 sudo[240012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugqbisjappulfhmbhazljqimbyytbdol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337522.4508655-838-57587118361698/AnsiballZ_systemd.py'
Oct 01 16:52:03 compute-0 sudo[240012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:03 compute-0 sudo[239967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:52:03 compute-0 sudo[239967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:52:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:03 compute-0 python3.9[240014]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 01 16:52:03 compute-0 systemd[1]: Reloading.
Oct 01 16:52:03 compute-0 systemd-sysv-generator[240102]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:52:03 compute-0 systemd-rc-local-generator[240098]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:52:03 compute-0 podman[240058]: 2025-10-01 16:52:03.97125601 +0000 UTC m=+0.079408510 container create 2d58b4203eb43d03c08de340e785e1b3675723af4e2060a181e9f713a121953b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bassi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:52:04 compute-0 podman[240058]: 2025-10-01 16:52:03.935391924 +0000 UTC m=+0.043544304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:52:04 compute-0 systemd[1]: Started libpod-conmon-2d58b4203eb43d03c08de340e785e1b3675723af4e2060a181e9f713a121953b.scope.
Oct 01 16:52:04 compute-0 sudo[240012]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:04 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:52:04 compute-0 podman[240058]: 2025-10-01 16:52:04.247876732 +0000 UTC m=+0.356029112 container init 2d58b4203eb43d03c08de340e785e1b3675723af4e2060a181e9f713a121953b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:52:04 compute-0 podman[240058]: 2025-10-01 16:52:04.259038437 +0000 UTC m=+0.367190767 container start 2d58b4203eb43d03c08de340e785e1b3675723af4e2060a181e9f713a121953b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 16:52:04 compute-0 podman[240058]: 2025-10-01 16:52:04.264948685 +0000 UTC m=+0.373100995 container attach 2d58b4203eb43d03c08de340e785e1b3675723af4e2060a181e9f713a121953b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:52:04 compute-0 dazzling_bassi[240108]: 167 167
Oct 01 16:52:04 compute-0 systemd[1]: libpod-2d58b4203eb43d03c08de340e785e1b3675723af4e2060a181e9f713a121953b.scope: Deactivated successfully.
Oct 01 16:52:04 compute-0 podman[240058]: 2025-10-01 16:52:04.267002642 +0000 UTC m=+0.375154942 container died 2d58b4203eb43d03c08de340e785e1b3675723af4e2060a181e9f713a121953b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 01 16:52:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f75785c49079c3a06a54e6f7b4c95d6517cf9dbc7e60c9febde2ae1045708d62-merged.mount: Deactivated successfully.
Oct 01 16:52:04 compute-0 podman[240058]: 2025-10-01 16:52:04.315295351 +0000 UTC m=+0.423447661 container remove 2d58b4203eb43d03c08de340e785e1b3675723af4e2060a181e9f713a121953b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bassi, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 16:52:04 compute-0 systemd[1]: libpod-conmon-2d58b4203eb43d03c08de340e785e1b3675723af4e2060a181e9f713a121953b.scope: Deactivated successfully.
Oct 01 16:52:04 compute-0 ceph-mon[74273]: pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:04 compute-0 podman[240180]: 2025-10-01 16:52:04.557817734 +0000 UTC m=+0.070978624 container create 85c2f6afb31fab060c879920dab6a654278e0699129e0614905f7826b69885c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cerf, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 16:52:04 compute-0 sudo[240220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzendtukkmnteyecwqcqvxewescueuyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337522.4508655-838-57587118361698/AnsiballZ_systemd.py'
Oct 01 16:52:04 compute-0 sudo[240220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:04 compute-0 systemd[1]: Started libpod-conmon-85c2f6afb31fab060c879920dab6a654278e0699129e0614905f7826b69885c7.scope.
Oct 01 16:52:04 compute-0 podman[240180]: 2025-10-01 16:52:04.528386735 +0000 UTC m=+0.041547625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:52:04 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:52:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0228395856c1f9f3e84cb25f129975722247e418da6bbac6ef50a0746473eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:52:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0228395856c1f9f3e84cb25f129975722247e418da6bbac6ef50a0746473eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:52:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0228395856c1f9f3e84cb25f129975722247e418da6bbac6ef50a0746473eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:52:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0228395856c1f9f3e84cb25f129975722247e418da6bbac6ef50a0746473eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:52:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0228395856c1f9f3e84cb25f129975722247e418da6bbac6ef50a0746473eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:52:04 compute-0 podman[240180]: 2025-10-01 16:52:04.858162908 +0000 UTC m=+0.371323828 container init 85c2f6afb31fab060c879920dab6a654278e0699129e0614905f7826b69885c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cerf, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:52:04 compute-0 podman[240180]: 2025-10-01 16:52:04.869542363 +0000 UTC m=+0.382703263 container start 85c2f6afb31fab060c879920dab6a654278e0699129e0614905f7826b69885c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cerf, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:52:04 compute-0 podman[240180]: 2025-10-01 16:52:04.879695709 +0000 UTC m=+0.392856609 container attach 85c2f6afb31fab060c879920dab6a654278e0699129e0614905f7826b69885c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 16:52:04 compute-0 python3.9[240222]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:52:05 compute-0 systemd[1]: Reloading.
Oct 01 16:52:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:52:05 compute-0 systemd-rc-local-generator[240260]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:52:05 compute-0 systemd-sysv-generator[240263]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:52:05 compute-0 systemd[1]: Starting multipathd container...
Oct 01 16:52:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:52:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1422c5c03096155801a83d3f064d8b45b6ad458e0a09b927122ce44d776df9f4/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 01 16:52:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1422c5c03096155801a83d3f064d8b45b6ad458e0a09b927122ce44d776df9f4/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 01 16:52:05 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe.
Oct 01 16:52:05 compute-0 podman[240270]: 2025-10-01 16:52:05.543228582 +0000 UTC m=+0.131529674 container init 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:52:05 compute-0 multipathd[240285]: + sudo -E kolla_set_configs
Oct 01 16:52:05 compute-0 podman[240270]: 2025-10-01 16:52:05.578710164 +0000 UTC m=+0.167011226 container start 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 01 16:52:05 compute-0 sudo[240292]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 01 16:52:05 compute-0 podman[240270]: multipathd
Oct 01 16:52:05 compute-0 sudo[240292]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 01 16:52:05 compute-0 sudo[240292]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 01 16:52:05 compute-0 systemd[1]: Started multipathd container.
Oct 01 16:52:05 compute-0 sudo[240220]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:05 compute-0 multipathd[240285]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 01 16:52:05 compute-0 multipathd[240285]: INFO:__main__:Validating config file
Oct 01 16:52:05 compute-0 multipathd[240285]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 01 16:52:05 compute-0 multipathd[240285]: INFO:__main__:Writing out command to execute
Oct 01 16:52:05 compute-0 sudo[240292]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:05 compute-0 multipathd[240285]: ++ cat /run_command
Oct 01 16:52:05 compute-0 multipathd[240285]: + CMD='/usr/sbin/multipathd -d'
Oct 01 16:52:05 compute-0 multipathd[240285]: + ARGS=
Oct 01 16:52:05 compute-0 multipathd[240285]: + sudo kolla_copy_cacerts
Oct 01 16:52:05 compute-0 podman[240293]: 2025-10-01 16:52:05.685684479 +0000 UTC m=+0.084684186 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.schema-version=1.0)
Oct 01 16:52:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:05 compute-0 sudo[240326]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 01 16:52:05 compute-0 systemd[1]: 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe-642d0ba5b816572.service: Main process exited, code=exited, status=1/FAILURE
Oct 01 16:52:05 compute-0 systemd[1]: 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe-642d0ba5b816572.service: Failed with result 'exit-code'.
Oct 01 16:52:05 compute-0 sudo[240326]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 01 16:52:05 compute-0 sudo[240326]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 01 16:52:05 compute-0 sudo[240326]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:05 compute-0 multipathd[240285]: + [[ ! -n '' ]]
Oct 01 16:52:05 compute-0 multipathd[240285]: + . kolla_extend_start
Oct 01 16:52:05 compute-0 multipathd[240285]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 01 16:52:05 compute-0 multipathd[240285]: Running command: '/usr/sbin/multipathd -d'
Oct 01 16:52:05 compute-0 multipathd[240285]: + umask 0022
Oct 01 16:52:05 compute-0 multipathd[240285]: + exec /usr/sbin/multipathd -d
Oct 01 16:52:05 compute-0 multipathd[240285]: 3240.485781 | --------start up--------
Oct 01 16:52:05 compute-0 multipathd[240285]: 3240.485799 | read /etc/multipath.conf
Oct 01 16:52:05 compute-0 multipathd[240285]: 3240.491769 | path checkers start up
Oct 01 16:52:05 compute-0 reverent_cerf[240226]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:52:05 compute-0 reverent_cerf[240226]: --> relative data size: 1.0
Oct 01 16:52:05 compute-0 reverent_cerf[240226]: --> All data devices are unavailable
Oct 01 16:52:05 compute-0 systemd[1]: libpod-85c2f6afb31fab060c879920dab6a654278e0699129e0614905f7826b69885c7.scope: Deactivated successfully.
Oct 01 16:52:05 compute-0 systemd[1]: libpod-85c2f6afb31fab060c879920dab6a654278e0699129e0614905f7826b69885c7.scope: Consumed 1.005s CPU time.
Oct 01 16:52:05 compute-0 podman[240180]: 2025-10-01 16:52:05.955264458 +0000 UTC m=+1.468425378 container died 85c2f6afb31fab060c879920dab6a654278e0699129e0614905f7826b69885c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct 01 16:52:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e0228395856c1f9f3e84cb25f129975722247e418da6bbac6ef50a0746473eb-merged.mount: Deactivated successfully.
Oct 01 16:52:06 compute-0 podman[240180]: 2025-10-01 16:52:06.008943947 +0000 UTC m=+1.522104807 container remove 85c2f6afb31fab060c879920dab6a654278e0699129e0614905f7826b69885c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cerf, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:52:06 compute-0 systemd[1]: libpod-conmon-85c2f6afb31fab060c879920dab6a654278e0699129e0614905f7826b69885c7.scope: Deactivated successfully.
Oct 01 16:52:06 compute-0 sudo[239967]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:06 compute-0 sudo[240464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:52:06 compute-0 sudo[240464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:52:06 compute-0 sudo[240464]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:06 compute-0 sudo[240514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:52:06 compute-0 sudo[240514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:52:06 compute-0 sudo[240514]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:06 compute-0 sudo[240559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:52:06 compute-0 sudo[240559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:52:06 compute-0 sudo[240559]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:06 compute-0 sudo[240584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:52:06 compute-0 sudo[240584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:52:06 compute-0 python3.9[240556]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:52:06 compute-0 podman[240736]: 2025-10-01 16:52:06.736097593 +0000 UTC m=+0.058745214 container create d7f2b205509e0143430d11fcdac878e6d5285c65cd580470bd512a89d8121eff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:52:06 compute-0 ceph-mon[74273]: pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:06 compute-0 systemd[1]: Started libpod-conmon-d7f2b205509e0143430d11fcdac878e6d5285c65cd580470bd512a89d8121eff.scope.
Oct 01 16:52:06 compute-0 podman[240736]: 2025-10-01 16:52:06.703166827 +0000 UTC m=+0.025814528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:52:06 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:52:06 compute-0 podman[240736]: 2025-10-01 16:52:06.828927625 +0000 UTC m=+0.151575226 container init d7f2b205509e0143430d11fcdac878e6d5285c65cd580470bd512a89d8121eff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:52:06 compute-0 podman[240736]: 2025-10-01 16:52:06.841251822 +0000 UTC m=+0.163899443 container start d7f2b205509e0143430d11fcdac878e6d5285c65cd580470bd512a89d8121eff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hopper, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 16:52:06 compute-0 exciting_hopper[240786]: 167 167
Oct 01 16:52:06 compute-0 systemd[1]: libpod-d7f2b205509e0143430d11fcdac878e6d5285c65cd580470bd512a89d8121eff.scope: Deactivated successfully.
Oct 01 16:52:06 compute-0 podman[240736]: 2025-10-01 16:52:06.847109963 +0000 UTC m=+0.169757594 container attach d7f2b205509e0143430d11fcdac878e6d5285c65cd580470bd512a89d8121eff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 16:52:06 compute-0 podman[240736]: 2025-10-01 16:52:06.84766354 +0000 UTC m=+0.170311161 container died d7f2b205509e0143430d11fcdac878e6d5285c65cd580470bd512a89d8121eff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:52:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-7398edf20d2da26c07ca5e2137d8fb8f4021b3bcc42e8cbe951f868036e04507-merged.mount: Deactivated successfully.
Oct 01 16:52:06 compute-0 podman[240736]: 2025-10-01 16:52:06.894442201 +0000 UTC m=+0.217089782 container remove d7f2b205509e0143430d11fcdac878e6d5285c65cd580470bd512a89d8121eff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 16:52:06 compute-0 sudo[240842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bujmghyirrdygfkyleujasjoaqztwxls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337526.565627-874-204785767899506/AnsiballZ_command.py'
Oct 01 16:52:06 compute-0 systemd[1]: libpod-conmon-d7f2b205509e0143430d11fcdac878e6d5285c65cd580470bd512a89d8121eff.scope: Deactivated successfully.
Oct 01 16:52:06 compute-0 sudo[240842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:06 compute-0 podman[240793]: 2025-10-01 16:52:06.951199035 +0000 UTC m=+0.106061203 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS)
Oct 01 16:52:07 compute-0 podman[240859]: 2025-10-01 16:52:07.053433243 +0000 UTC m=+0.042732327 container create 62e2262c1fdd76d23711941384ac84609d2e8c5e2172294c21d9de8e1afdcdd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 16:52:07 compute-0 systemd[1]: Started libpod-conmon-62e2262c1fdd76d23711941384ac84609d2e8c5e2172294c21d9de8e1afdcdd5.scope.
Oct 01 16:52:07 compute-0 python3.9[240845]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:52:07 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:52:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cd0b87f64c22a1a4661ddc866f40ebaf5ddf9ec9cc0f3e18bf2c2977b3961e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:52:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cd0b87f64c22a1a4661ddc866f40ebaf5ddf9ec9cc0f3e18bf2c2977b3961e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:52:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cd0b87f64c22a1a4661ddc866f40ebaf5ddf9ec9cc0f3e18bf2c2977b3961e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:52:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cd0b87f64c22a1a4661ddc866f40ebaf5ddf9ec9cc0f3e18bf2c2977b3961e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:52:07 compute-0 podman[240859]: 2025-10-01 16:52:07.032780556 +0000 UTC m=+0.022079920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:52:07 compute-0 podman[240859]: 2025-10-01 16:52:07.135320391 +0000 UTC m=+0.124619505 container init 62e2262c1fdd76d23711941384ac84609d2e8c5e2172294c21d9de8e1afdcdd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:52:07 compute-0 podman[240859]: 2025-10-01 16:52:07.145821333 +0000 UTC m=+0.135120397 container start 62e2262c1fdd76d23711941384ac84609d2e8c5e2172294c21d9de8e1afdcdd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:52:07 compute-0 podman[240859]: 2025-10-01 16:52:07.148937456 +0000 UTC m=+0.138236550 container attach 62e2262c1fdd76d23711941384ac84609d2e8c5e2172294c21d9de8e1afdcdd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 16:52:07 compute-0 sudo[240842]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:07 compute-0 sudo[241044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqdupfenyxtkjrvkajieebfujcfavukk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337527.412858-882-123465046725694/AnsiballZ_systemd.py'
Oct 01 16:52:07 compute-0 sudo[241044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:07 compute-0 exciting_khorana[240876]: {
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:     "0": [
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:         {
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "devices": [
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "/dev/loop3"
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             ],
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "lv_name": "ceph_lv0",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "lv_size": "21470642176",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "name": "ceph_lv0",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "tags": {
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.cluster_name": "ceph",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.crush_device_class": "",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.encrypted": "0",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.osd_id": "0",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.type": "block",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.vdo": "0"
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             },
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "type": "block",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "vg_name": "ceph_vg0"
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:         }
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:     ],
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:     "1": [
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:         {
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "devices": [
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "/dev/loop4"
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             ],
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "lv_name": "ceph_lv1",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "lv_size": "21470642176",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "name": "ceph_lv1",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "tags": {
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.cluster_name": "ceph",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.crush_device_class": "",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.encrypted": "0",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.osd_id": "1",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.type": "block",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.vdo": "0"
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             },
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "type": "block",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "vg_name": "ceph_vg1"
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:         }
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:     ],
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:     "2": [
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:         {
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "devices": [
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "/dev/loop5"
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             ],
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "lv_name": "ceph_lv2",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "lv_size": "21470642176",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "name": "ceph_lv2",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "tags": {
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.cluster_name": "ceph",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.crush_device_class": "",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.encrypted": "0",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.osd_id": "2",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.type": "block",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:                 "ceph.vdo": "0"
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             },
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "type": "block",
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:             "vg_name": "ceph_vg2"
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:         }
Oct 01 16:52:07 compute-0 exciting_khorana[240876]:     ]
Oct 01 16:52:07 compute-0 exciting_khorana[240876]: }
Oct 01 16:52:07 compute-0 systemd[1]: libpod-62e2262c1fdd76d23711941384ac84609d2e8c5e2172294c21d9de8e1afdcdd5.scope: Deactivated successfully.
Oct 01 16:52:07 compute-0 podman[240859]: 2025-10-01 16:52:07.903735004 +0000 UTC m=+0.893034088 container died 62e2262c1fdd76d23711941384ac84609d2e8c5e2172294c21d9de8e1afdcdd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 16:52:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-02cd0b87f64c22a1a4661ddc866f40ebaf5ddf9ec9cc0f3e18bf2c2977b3961e-merged.mount: Deactivated successfully.
Oct 01 16:52:07 compute-0 podman[240859]: 2025-10-01 16:52:07.976461437 +0000 UTC m=+0.965760521 container remove 62e2262c1fdd76d23711941384ac84609d2e8c5e2172294c21d9de8e1afdcdd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 16:52:07 compute-0 systemd[1]: libpod-conmon-62e2262c1fdd76d23711941384ac84609d2e8c5e2172294c21d9de8e1afdcdd5.scope: Deactivated successfully.
Oct 01 16:52:08 compute-0 sudo[240584]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:08 compute-0 python3.9[241047]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 16:52:08 compute-0 systemd[1]: Stopping multipathd container...
Oct 01 16:52:08 compute-0 sudo[241062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:52:08 compute-0 sudo[241062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:52:08 compute-0 sudo[241062]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:08 compute-0 multipathd[240285]: 3242.953921 | exit (signal)
Oct 01 16:52:08 compute-0 multipathd[240285]: 3242.954527 | --------shut down-------
Oct 01 16:52:08 compute-0 sudo[241096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:52:08 compute-0 sudo[241096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:52:08 compute-0 sudo[241096]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:08 compute-0 systemd[1]: libpod-82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe.scope: Deactivated successfully.
Oct 01 16:52:08 compute-0 podman[241089]: 2025-10-01 16:52:08.22926736 +0000 UTC m=+0.094689937 container died 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 01 16:52:08 compute-0 systemd[1]: 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe-642d0ba5b816572.timer: Deactivated successfully.
Oct 01 16:52:08 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe.
Oct 01 16:52:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe-userdata-shm.mount: Deactivated successfully.
Oct 01 16:52:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-1422c5c03096155801a83d3f064d8b45b6ad458e0a09b927122ce44d776df9f4-merged.mount: Deactivated successfully.
Oct 01 16:52:08 compute-0 sudo[241128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:52:08 compute-0 sudo[241128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:52:08 compute-0 sudo[241128]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:08 compute-0 sudo[241164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:52:08 compute-0 sudo[241164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:52:08 compute-0 podman[241089]: 2025-10-01 16:52:08.42481977 +0000 UTC m=+0.290242347 container cleanup 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 01 16:52:08 compute-0 podman[241089]: multipathd
Oct 01 16:52:08 compute-0 podman[241192]: multipathd
Oct 01 16:52:08 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 01 16:52:08 compute-0 systemd[1]: Stopped multipathd container.
Oct 01 16:52:08 compute-0 systemd[1]: Starting multipathd container...
Oct 01 16:52:08 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:52:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1422c5c03096155801a83d3f064d8b45b6ad458e0a09b927122ce44d776df9f4/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 01 16:52:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1422c5c03096155801a83d3f064d8b45b6ad458e0a09b927122ce44d776df9f4/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 01 16:52:08 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe.
Oct 01 16:52:08 compute-0 podman[241217]: 2025-10-01 16:52:08.61081031 +0000 UTC m=+0.106862150 container init 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 01 16:52:08 compute-0 multipathd[241257]: + sudo -E kolla_set_configs
Oct 01 16:52:08 compute-0 sudo[241265]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 01 16:52:08 compute-0 sudo[241265]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 01 16:52:08 compute-0 sudo[241265]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 01 16:52:08 compute-0 podman[241217]: 2025-10-01 16:52:08.649014701 +0000 UTC m=+0.145066531 container start 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 01 16:52:08 compute-0 podman[241217]: multipathd
Oct 01 16:52:08 compute-0 systemd[1]: Started multipathd container.
Oct 01 16:52:08 compute-0 multipathd[241257]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 01 16:52:08 compute-0 multipathd[241257]: INFO:__main__:Validating config file
Oct 01 16:52:08 compute-0 multipathd[241257]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 01 16:52:08 compute-0 multipathd[241257]: INFO:__main__:Writing out command to execute
Oct 01 16:52:08 compute-0 sudo[241265]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:08 compute-0 multipathd[241257]: ++ cat /run_command
Oct 01 16:52:08 compute-0 multipathd[241257]: + CMD='/usr/sbin/multipathd -d'
Oct 01 16:52:08 compute-0 multipathd[241257]: + ARGS=
Oct 01 16:52:08 compute-0 multipathd[241257]: + sudo kolla_copy_cacerts
Oct 01 16:52:08 compute-0 sudo[241044]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:08 compute-0 sudo[241288]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 01 16:52:08 compute-0 sudo[241288]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 01 16:52:08 compute-0 sudo[241288]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 01 16:52:08 compute-0 sudo[241288]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:08 compute-0 multipathd[241257]: + [[ ! -n '' ]]
Oct 01 16:52:08 compute-0 multipathd[241257]: + . kolla_extend_start
Oct 01 16:52:08 compute-0 multipathd[241257]: Running command: '/usr/sbin/multipathd -d'
Oct 01 16:52:08 compute-0 multipathd[241257]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 01 16:52:08 compute-0 multipathd[241257]: + umask 0022
Oct 01 16:52:08 compute-0 multipathd[241257]: + exec /usr/sbin/multipathd -d
Oct 01 16:52:08 compute-0 podman[241279]: 2025-10-01 16:52:08.730729787 +0000 UTC m=+0.042149421 container create 7f6f4ff2d214a2132bd2301c382d77e52000f934c6602aaae58faf5e34cd7d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_johnson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 16:52:08 compute-0 multipathd[241257]: 3243.500263 | --------start up--------
Oct 01 16:52:08 compute-0 multipathd[241257]: 3243.500307 | read /etc/multipath.conf
Oct 01 16:52:08 compute-0 multipathd[241257]: 3243.506099 | path checkers start up
Oct 01 16:52:08 compute-0 podman[241264]: 2025-10-01 16:52:08.746765722 +0000 UTC m=+0.090149392 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 01 16:52:08 compute-0 systemd[1]: 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe-165a9851e89766c4.service: Main process exited, code=exited, status=1/FAILURE
Oct 01 16:52:08 compute-0 systemd[1]: 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe-165a9851e89766c4.service: Failed with result 'exit-code'.
Oct 01 16:52:08 compute-0 ceph-mon[74273]: pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:08 compute-0 systemd[1]: Started libpod-conmon-7f6f4ff2d214a2132bd2301c382d77e52000f934c6602aaae58faf5e34cd7d02.scope.
Oct 01 16:52:08 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:52:08 compute-0 podman[241279]: 2025-10-01 16:52:08.713420423 +0000 UTC m=+0.024840087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:52:08 compute-0 podman[241279]: 2025-10-01 16:52:08.816269006 +0000 UTC m=+0.127688660 container init 7f6f4ff2d214a2132bd2301c382d77e52000f934c6602aaae58faf5e34cd7d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 16:52:08 compute-0 podman[241279]: 2025-10-01 16:52:08.824390735 +0000 UTC m=+0.135810369 container start 7f6f4ff2d214a2132bd2301c382d77e52000f934c6602aaae58faf5e34cd7d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 16:52:08 compute-0 podman[241279]: 2025-10-01 16:52:08.827564185 +0000 UTC m=+0.138983819 container attach 7f6f4ff2d214a2132bd2301c382d77e52000f934c6602aaae58faf5e34cd7d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:52:08 compute-0 sweet_johnson[241339]: 167 167
Oct 01 16:52:08 compute-0 systemd[1]: libpod-7f6f4ff2d214a2132bd2301c382d77e52000f934c6602aaae58faf5e34cd7d02.scope: Deactivated successfully.
Oct 01 16:52:08 compute-0 conmon[241339]: conmon 7f6f4ff2d214a2132bd2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f6f4ff2d214a2132bd2301c382d77e52000f934c6602aaae58faf5e34cd7d02.scope/container/memory.events
Oct 01 16:52:08 compute-0 podman[241279]: 2025-10-01 16:52:08.830400689 +0000 UTC m=+0.141820323 container died 7f6f4ff2d214a2132bd2301c382d77e52000f934c6602aaae58faf5e34cd7d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 16:52:08 compute-0 podman[241279]: 2025-10-01 16:52:08.909993233 +0000 UTC m=+0.221412867 container remove 7f6f4ff2d214a2132bd2301c382d77e52000f934c6602aaae58faf5e34cd7d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_johnson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct 01 16:52:08 compute-0 systemd[1]: libpod-conmon-7f6f4ff2d214a2132bd2301c382d77e52000f934c6602aaae58faf5e34cd7d02.scope: Deactivated successfully.
Oct 01 16:52:09 compute-0 podman[241414]: 2025-10-01 16:52:09.072516309 +0000 UTC m=+0.049565788 container create f90c387048e2e441cf011b82233bc9e22bd0e3372cdc6cfa5492a8760501d6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 16:52:09 compute-0 systemd[1]: Started libpod-conmon-f90c387048e2e441cf011b82233bc9e22bd0e3372cdc6cfa5492a8760501d6c5.scope.
Oct 01 16:52:09 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2ffd00143c4ed2dfb9ef8c5c9d7020fc0d2ee8a09e8c03e1db805c8c50c272/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2ffd00143c4ed2dfb9ef8c5c9d7020fc0d2ee8a09e8c03e1db805c8c50c272/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2ffd00143c4ed2dfb9ef8c5c9d7020fc0d2ee8a09e8c03e1db805c8c50c272/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2ffd00143c4ed2dfb9ef8c5c9d7020fc0d2ee8a09e8c03e1db805c8c50c272/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:52:09 compute-0 podman[241414]: 2025-10-01 16:52:09.143025062 +0000 UTC m=+0.120074581 container init f90c387048e2e441cf011b82233bc9e22bd0e3372cdc6cfa5492a8760501d6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_antonelli, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 16:52:09 compute-0 podman[241414]: 2025-10-01 16:52:09.151139061 +0000 UTC m=+0.128188550 container start f90c387048e2e441cf011b82233bc9e22bd0e3372cdc6cfa5492a8760501d6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_antonelli, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:52:09 compute-0 podman[241414]: 2025-10-01 16:52:09.056288291 +0000 UTC m=+0.033337800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:52:09 compute-0 podman[241414]: 2025-10-01 16:52:09.155379248 +0000 UTC m=+0.132428747 container attach f90c387048e2e441cf011b82233bc9e22bd0e3372cdc6cfa5492a8760501d6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_antonelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:52:09 compute-0 sudo[241509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leulobxecptnwzkclezetvvzowgvgjiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337528.9348898-890-50237626181592/AnsiballZ_file.py'
Oct 01 16:52:09 compute-0 sudo[241509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:09 compute-0 python3.9[241511]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:09 compute-0 sudo[241509]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:52:10 compute-0 sudo[241684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zurlctvkbpeofjqrioddwmienacwprhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337529.7559314-902-69901373135549/AnsiballZ_file.py'
Oct 01 16:52:10 compute-0 sudo[241684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:10 compute-0 brave_antonelli[241477]: {
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:         "osd_id": 2,
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:         "type": "bluestore"
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:     },
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:         "osd_id": 0,
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:         "type": "bluestore"
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:     },
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:         "osd_id": 1,
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:         "type": "bluestore"
Oct 01 16:52:10 compute-0 brave_antonelli[241477]:     }
Oct 01 16:52:10 compute-0 brave_antonelli[241477]: }
Oct 01 16:52:10 compute-0 systemd[1]: libpod-f90c387048e2e441cf011b82233bc9e22bd0e3372cdc6cfa5492a8760501d6c5.scope: Deactivated successfully.
Oct 01 16:52:10 compute-0 systemd[1]: libpod-f90c387048e2e441cf011b82233bc9e22bd0e3372cdc6cfa5492a8760501d6c5.scope: Consumed 1.029s CPU time.
Oct 01 16:52:10 compute-0 podman[241414]: 2025-10-01 16:52:10.177910232 +0000 UTC m=+1.154959721 container died f90c387048e2e441cf011b82233bc9e22bd0e3372cdc6cfa5492a8760501d6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_antonelli, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:52:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b2ffd00143c4ed2dfb9ef8c5c9d7020fc0d2ee8a09e8c03e1db805c8c50c272-merged.mount: Deactivated successfully.
Oct 01 16:52:10 compute-0 podman[241414]: 2025-10-01 16:52:10.240870817 +0000 UTC m=+1.217920306 container remove f90c387048e2e441cf011b82233bc9e22bd0e3372cdc6cfa5492a8760501d6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_antonelli, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 16:52:10 compute-0 systemd[1]: libpod-conmon-f90c387048e2e441cf011b82233bc9e22bd0e3372cdc6cfa5492a8760501d6c5.scope: Deactivated successfully.
Oct 01 16:52:10 compute-0 sudo[241164]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:52:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:52:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:52:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:52:10 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 9bf2152d-e892-4f74-863f-0e93872403e0 does not exist
Oct 01 16:52:10 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 5abfdd4f-574d-4ff9-9741-5500a83e4e6c does not exist
Oct 01 16:52:10 compute-0 python3.9[241689]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 01 16:52:10 compute-0 sudo[241684]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:10 compute-0 sudo[241706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:52:10 compute-0 sudo[241706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:52:10 compute-0 sudo[241706]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:10 compute-0 sudo[241738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:52:10 compute-0 sudo[241738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:52:10 compute-0 sudo[241738]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:10 compute-0 sudo[241905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvumtmkmxhqzfvxhelnupgxnricfzztz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337530.4954739-910-72552367150873/AnsiballZ_modprobe.py'
Oct 01 16:52:10 compute-0 sudo[241905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:10 compute-0 ceph-mon[74273]: pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:52:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:52:10 compute-0 python3.9[241907]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 01 16:52:10 compute-0 kernel: Key type psk registered
Oct 01 16:52:11 compute-0 sudo[241905]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:52:11
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'vms', 'images', '.rgw.root', '.mgr', 'backups']
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:52:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:11 compute-0 sudo[242069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxvukjbzorjqbayrvoevcnakqsipoyxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337531.2496603-918-120561901325151/AnsiballZ_stat.py'
Oct 01 16:52:11 compute-0 sudo[242069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:11 compute-0 python3.9[242071]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:52:11 compute-0 sudo[242069]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:12 compute-0 sudo[242192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhaagoigrpsghmrryxobztjnokntbnwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337531.2496603-918-120561901325151/AnsiballZ_copy.py'
Oct 01 16:52:12 compute-0 sudo[242192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:12 compute-0 python3.9[242194]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759337531.2496603-918-120561901325151/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:12 compute-0 sudo[242192]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:12 compute-0 ceph-mon[74273]: pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:13 compute-0 sudo[242344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjnuupjhhjvlbvfvarifnhhlafrmgkfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337532.774646-934-243006930385765/AnsiballZ_lineinfile.py'
Oct 01 16:52:13 compute-0 sudo[242344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:13 compute-0 python3.9[242346]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:13 compute-0 sudo[242344]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:13 compute-0 sudo[242496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtcsibdrnlptrcgpnbyuwmfcxetyakww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337533.5193455-942-255994770900142/AnsiballZ_systemd.py'
Oct 01 16:52:13 compute-0 sudo[242496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:14 compute-0 python3.9[242498]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 16:52:14 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 01 16:52:14 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 01 16:52:14 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 01 16:52:14 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 01 16:52:14 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 01 16:52:14 compute-0 sudo[242496]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:14 compute-0 sudo[242652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwjklkyzdlldugmslmssdsbqbqghprnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337534.463959-950-184357088408026/AnsiballZ_setup.py'
Oct 01 16:52:14 compute-0 ceph-mon[74273]: pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:14 compute-0 sudo[242652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:15 compute-0 python3.9[242654]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 16:52:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:52:15 compute-0 sudo[242652]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:15 compute-0 sudo[242736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puprqczsuigbacxffyonxvpnkpataoik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337534.463959-950-184357088408026/AnsiballZ_dnf.py'
Oct 01 16:52:15 compute-0 sudo[242736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:16 compute-0 python3.9[242738]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 16:52:16 compute-0 ceph-mon[74273]: pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:18 compute-0 ceph-mon[74273]: pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:19 compute-0 podman[242740]: 2025-10-01 16:52:19.788815781 +0000 UTC m=+0.097992445 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 16:52:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:52:19.952 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:52:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:52:19.953 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:52:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:52:19.953 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:52:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:52:20 compute-0 ceph-mon[74273]: pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:52:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:22 compute-0 systemd[1]: Reloading.
Oct 01 16:52:22 compute-0 systemd-rc-local-generator[242797]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:52:22 compute-0 systemd-sysv-generator[242801]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:52:22 compute-0 systemd[1]: Reloading.
Oct 01 16:52:22 compute-0 systemd-rc-local-generator[242832]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:52:22 compute-0 systemd-sysv-generator[242836]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:52:22 compute-0 ceph-mon[74273]: pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:23 compute-0 systemd-logind[788]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 01 16:52:23 compute-0 systemd-logind[788]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 01 16:52:23 compute-0 lvm[242880]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 01 16:52:23 compute-0 lvm[242880]: VG ceph_vg0 finished
Oct 01 16:52:23 compute-0 lvm[242882]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 01 16:52:23 compute-0 lvm[242882]: VG ceph_vg2 finished
Oct 01 16:52:23 compute-0 lvm[242881]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 01 16:52:23 compute-0 lvm[242881]: VG ceph_vg1 finished
Oct 01 16:52:23 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 01 16:52:23 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 01 16:52:23 compute-0 systemd[1]: Reloading.
Oct 01 16:52:23 compute-0 systemd-sysv-generator[242939]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:52:23 compute-0 systemd-rc-local-generator[242935]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:52:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:23 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 01 16:52:23 compute-0 podman[242945]: 2025-10-01 16:52:23.935858391 +0000 UTC m=+0.077675647 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 01 16:52:24 compute-0 sudo[242736]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:25 compute-0 sudo[244170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgnafweitktyfhirhkjgjaxmtygfealz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337544.7272267-962-44928588046705/AnsiballZ_file.py'
Oct 01 16:52:25 compute-0 sudo[244170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:25 compute-0 ceph-mon[74273]: pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:52:25 compute-0 python3.9[244184]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:25 compute-0 sudo[244170]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:25 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 01 16:52:25 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 01 16:52:25 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.829s CPU time.
Oct 01 16:52:25 compute-0 systemd[1]: run-rcca4690147224fd5bf1b6f92db589cdb.service: Deactivated successfully.
Oct 01 16:52:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:25 compute-0 python3.9[244395]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 16:52:26 compute-0 ceph-mon[74273]: pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:26 compute-0 sudo[244549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oragpdhlocgrvlogkbgkdmwcmbdbtldz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337546.4929352-980-119497388940313/AnsiballZ_file.py'
Oct 01 16:52:26 compute-0 sudo[244549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:27 compute-0 python3.9[244551]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:27 compute-0 sudo[244549]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:28 compute-0 sudo[244701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udhfrkbavwhuyvlwoqzljtlrwornfcxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337547.490718-991-36701132822790/AnsiballZ_systemd_service.py'
Oct 01 16:52:28 compute-0 sudo[244701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:28 compute-0 python3.9[244703]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 01 16:52:28 compute-0 systemd[1]: Reloading.
Oct 01 16:52:28 compute-0 systemd-rc-local-generator[244728]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:52:28 compute-0 systemd-sysv-generator[244734]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:52:28 compute-0 ceph-mon[74273]: pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:28 compute-0 sudo[244701]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:29 compute-0 python3.9[244888]: ansible-ansible.builtin.service_facts Invoked
Oct 01 16:52:29 compute-0 network[244905]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 01 16:52:29 compute-0 network[244906]: 'network-scripts' will be removed from distribution in near future.
Oct 01 16:52:29 compute-0 network[244907]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 01 16:52:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:52:30 compute-0 ceph-mon[74273]: pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:32 compute-0 ceph-mon[74273]: pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:34 compute-0 ceph-mon[74273]: pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:34 compute-0 sudo[245183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaahlptjjrdromguetbcctvvykirghfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337554.4967878-1010-35727776435970/AnsiballZ_systemd_service.py'
Oct 01 16:52:34 compute-0 sudo[245183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:52:35 compute-0 python3.9[245185]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:52:35 compute-0 sudo[245183]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:35 compute-0 sudo[245336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyldfunlvnunfcmvwekltjmrcsyrorst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337555.3799002-1010-86123234720833/AnsiballZ_systemd_service.py'
Oct 01 16:52:35 compute-0 sudo[245336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:36 compute-0 python3.9[245338]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:52:36 compute-0 sudo[245336]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:36 compute-0 sudo[245489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udsrmafwvxoxidqxyfxdrxwhkocazqrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337556.3041246-1010-91642522057588/AnsiballZ_systemd_service.py'
Oct 01 16:52:36 compute-0 sudo[245489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:36 compute-0 ceph-mon[74273]: pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:36 compute-0 python3.9[245491]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:52:37 compute-0 sudo[245489]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:37 compute-0 podman[245493]: 2025-10-01 16:52:37.142754033 +0000 UTC m=+0.115119494 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 16:52:37 compute-0 sudo[245663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hekfkzjkddktyuycwxqcfzjytgonblsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337557.2161508-1010-151038443454299/AnsiballZ_systemd_service.py'
Oct 01 16:52:37 compute-0 sudo[245663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:37 compute-0 python3.9[245665]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:52:38 compute-0 ceph-mon[74273]: pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:38 compute-0 sudo[245663]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:39 compute-0 sudo[245830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbgrtqqjbmfajfatwkjcatckscrrkbvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337559.1537228-1010-99282219690484/AnsiballZ_systemd_service.py'
Oct 01 16:52:39 compute-0 sudo[245830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:39 compute-0 podman[245790]: 2025-10-01 16:52:39.575614904 +0000 UTC m=+0.083951519 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 01 16:52:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:39 compute-0 python3.9[245836]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:52:39 compute-0 sudo[245830]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:52:40 compute-0 sudo[245989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cejodjdyfvwshhpectirypsxyslgkpum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337560.0632563-1010-178398002297831/AnsiballZ_systemd_service.py'
Oct 01 16:52:40 compute-0 sudo[245989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:40 compute-0 python3.9[245991]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:52:40 compute-0 sudo[245989]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:40 compute-0 ceph-mon[74273]: pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:41 compute-0 sudo[246142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpgwjcydvaxhzczzerkrabeumuqbneca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337560.9183033-1010-203750486694865/AnsiballZ_systemd_service.py'
Oct 01 16:52:41 compute-0 sudo[246142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:52:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:52:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:52:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:52:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:52:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:52:41 compute-0 python3.9[246144]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:52:41 compute-0 sudo[246142]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:42 compute-0 sudo[246295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmjanehdzdzsuwdftbdtjxkwzmkudvym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337561.835562-1010-67765800867047/AnsiballZ_systemd_service.py'
Oct 01 16:52:42 compute-0 sudo[246295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:42 compute-0 python3.9[246297]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:52:42 compute-0 sudo[246295]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:42 compute-0 ceph-mon[74273]: pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:43 compute-0 sudo[246448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaxosicrlciwwpptkvtlcxkgfecbuikj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337562.911505-1069-46665573635372/AnsiballZ_file.py'
Oct 01 16:52:43 compute-0 sudo[246448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:43 compute-0 python3.9[246450]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:43 compute-0 sudo[246448]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:43 compute-0 sudo[246600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqbmpqhjhizwjqumhaohqshsqfmvkroo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337563.589942-1069-6442177354533/AnsiballZ_file.py'
Oct 01 16:52:43 compute-0 sudo[246600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:44 compute-0 python3.9[246602]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:44 compute-0 sudo[246600]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:44 compute-0 sudo[246752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-masunjclybqtorlruyintfgibochnrjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337564.2456174-1069-58975845216881/AnsiballZ_file.py'
Oct 01 16:52:44 compute-0 sudo[246752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:44 compute-0 python3.9[246754]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:44 compute-0 sudo[246752]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:44 compute-0 ceph-mon[74273]: pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:52:45 compute-0 sudo[246904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqijoymhsdqhzfowygutbnjnhhihalud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337565.0751176-1069-231835465569703/AnsiballZ_file.py'
Oct 01 16:52:45 compute-0 sudo[246904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:45 compute-0 python3.9[246906]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:45 compute-0 sudo[246904]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:46 compute-0 sudo[247056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjlevwahwobukpycqzjbvlkdubqzpwmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337565.8984966-1069-225627873778376/AnsiballZ_file.py'
Oct 01 16:52:46 compute-0 sudo[247056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:46 compute-0 python3.9[247058]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:46 compute-0 sudo[247056]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:47 compute-0 sudo[247208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcrjeklucbvdhmuxlxzspoosqjnyctij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337566.6944563-1069-14615894126198/AnsiballZ_file.py'
Oct 01 16:52:47 compute-0 sudo[247208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:47 compute-0 ceph-mon[74273]: pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:47 compute-0 python3.9[247210]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:47 compute-0 sudo[247208]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:47 compute-0 sudo[247360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqiaowpgxszienafseutkltvsjzjscjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337567.4706395-1069-99698795254666/AnsiballZ_file.py'
Oct 01 16:52:47 compute-0 sudo[247360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:48 compute-0 python3.9[247362]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:48 compute-0 sudo[247360]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:48 compute-0 ceph-mon[74273]: pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:48 compute-0 sudo[247512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jspygkofckuxjgufcvetrtpgussabxlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337568.3456628-1069-272584207927340/AnsiballZ_file.py'
Oct 01 16:52:48 compute-0 sudo[247512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:48 compute-0 python3.9[247514]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:48 compute-0 sudo[247512]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:49 compute-0 sudo[247664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwxwbullpggmviumjutulllyzwpcgjbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337569.227533-1126-196281165304669/AnsiballZ_file.py'
Oct 01 16:52:49 compute-0 sudo[247664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:49 compute-0 python3.9[247666]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:49 compute-0 sudo[247664]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:52:50 compute-0 sudo[247829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqqibtjdrgperctnrhzwuqphehpwdozs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337570.026683-1126-238366428320446/AnsiballZ_file.py'
Oct 01 16:52:50 compute-0 sudo[247829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:50 compute-0 podman[247790]: 2025-10-01 16:52:50.453190158 +0000 UTC m=+0.127380977 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 16:52:50 compute-0 python3.9[247837]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:50 compute-0 sudo[247829]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:50 compute-0 ceph-mon[74273]: pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:51 compute-0 sudo[247994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqxxikknswafymfheokuloowclczjxdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337570.7959383-1126-7385400799273/AnsiballZ_file.py'
Oct 01 16:52:51 compute-0 sudo[247994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:51 compute-0 python3.9[247996]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:51 compute-0 sudo[247994]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:51 compute-0 sudo[248146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckpfwxkmucflveeivhorrewykupbjqsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337571.573396-1126-194231534664209/AnsiballZ_file.py'
Oct 01 16:52:51 compute-0 sudo[248146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:52 compute-0 python3.9[248148]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:52 compute-0 sudo[248146]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:52 compute-0 sudo[248298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcvkgggdspydymsxqwuzkjcuoridtdik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337572.3046103-1126-254093515309896/AnsiballZ_file.py'
Oct 01 16:52:52 compute-0 sudo[248298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:52 compute-0 ceph-mon[74273]: pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:52 compute-0 python3.9[248300]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:52 compute-0 sudo[248298]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:53 compute-0 sudo[248450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkzvzqxiulktddyheffonadtstovtvtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337573.0546842-1126-78792970485667/AnsiballZ_file.py'
Oct 01 16:52:53 compute-0 sudo[248450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:53 compute-0 python3.9[248452]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:53 compute-0 sudo[248450]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:54 compute-0 sudo[248618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwrosndogucehouprogfoqpfnrpovbkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337573.8278325-1126-218992999192776/AnsiballZ_file.py'
Oct 01 16:52:54 compute-0 podman[248576]: 2025-10-01 16:52:54.192168035 +0000 UTC m=+0.074792377 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 01 16:52:54 compute-0 sudo[248618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:54 compute-0 python3.9[248622]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:54 compute-0 sudo[248618]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:54 compute-0 ceph-mon[74273]: pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:55 compute-0 sudo[248772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqucdbbqmybktlndiwcnqxjpmxgptazy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337574.6451483-1126-52563221988631/AnsiballZ_file.py'
Oct 01 16:52:55 compute-0 sudo[248772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:52:55 compute-0 python3.9[248774]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:52:55 compute-0 sudo[248772]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:55 compute-0 sudo[248924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltkxjnqnwtnytnjxkswxhahpxdntfgzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337575.5470352-1184-156816717064493/AnsiballZ_command.py'
Oct 01 16:52:55 compute-0 sudo[248924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:56 compute-0 python3.9[248926]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:52:56 compute-0 sudo[248924]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:56 compute-0 ceph-mon[74273]: pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:57 compute-0 python3.9[249078]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 01 16:52:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:57 compute-0 sudo[249228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfxjekuxikbijrbfuqcxlqfwfeqxfcrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337577.5175326-1202-162021739405654/AnsiballZ_systemd_service.py'
Oct 01 16:52:57 compute-0 sudo[249228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:58 compute-0 python3.9[249230]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 01 16:52:58 compute-0 systemd[1]: Reloading.
Oct 01 16:52:58 compute-0 systemd-rc-local-generator[249254]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:52:58 compute-0 systemd-sysv-generator[249262]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:52:58 compute-0 sudo[249228]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:58 compute-0 ceph-mon[74273]: pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:59 compute-0 sudo[249415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vulsfjzxojdjnfjayqbgfwchonnmandg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337578.767264-1210-16770857994744/AnsiballZ_command.py'
Oct 01 16:52:59 compute-0 sudo[249415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:52:59 compute-0 python3.9[249417]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:52:59 compute-0 sudo[249415]: pam_unix(sudo:session): session closed for user root
Oct 01 16:52:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:52:59 compute-0 sudo[249568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-courtdnnttqiwfdzgkbcrfafvtbjomdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337579.6172354-1210-195236547822106/AnsiballZ_command.py'
Oct 01 16:52:59 compute-0 sudo[249568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:53:00 compute-0 python3.9[249570]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:53:00 compute-0 sudo[249568]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:00 compute-0 sudo[249721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjetdfnqbrwrzwypfgumzmvrpctlofvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337580.415395-1210-107316965531330/AnsiballZ_command.py'
Oct 01 16:53:00 compute-0 sudo[249721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:00 compute-0 ceph-mon[74273]: pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:01 compute-0 python3.9[249723]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:53:01 compute-0 sudo[249721]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:01 compute-0 sudo[249874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgdvnxrcohaiiimarjbksagbiymjcpep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337581.270939-1210-47824273572953/AnsiballZ_command.py'
Oct 01 16:53:01 compute-0 sudo[249874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:01 compute-0 python3.9[249876]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:53:02 compute-0 sudo[249874]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:02 compute-0 ceph-mon[74273]: pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:03 compute-0 sudo[250027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjvequgqzisrgiqpvnwvwrnfxyaqxqil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337583.0505297-1210-179868818193272/AnsiballZ_command.py'
Oct 01 16:53:03 compute-0 sudo[250027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:03 compute-0 python3.9[250029]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:53:03 compute-0 sudo[250027]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:04 compute-0 sudo[250180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sraxnndbxzuqfiuyddebtcyrmsuixzvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337583.8129654-1210-254875014152211/AnsiballZ_command.py'
Oct 01 16:53:04 compute-0 sudo[250180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:04 compute-0 python3.9[250182]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:53:04 compute-0 sudo[250180]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:04 compute-0 sudo[250333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvvewfaxzqeeuinfjtkmorelyzowfyll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337584.5776722-1210-58226869937283/AnsiballZ_command.py'
Oct 01 16:53:04 compute-0 sudo[250333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:04 compute-0 ceph-mon[74273]: pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:53:05.114398) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337585114475, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1217, "num_deletes": 505, "total_data_size": 1387319, "memory_usage": 1417168, "flush_reason": "Manual Compaction"}
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337585122061, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1374089, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13588, "largest_seqno": 14804, "table_properties": {"data_size": 1368665, "index_size": 2372, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 13832, "raw_average_key_size": 17, "raw_value_size": 1355928, "raw_average_value_size": 1749, "num_data_blocks": 108, "num_entries": 775, "num_filter_entries": 775, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759337491, "oldest_key_time": 1759337491, "file_creation_time": 1759337585, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 7703 microseconds, and 3689 cpu microseconds.
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:53:05.122115) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1374089 bytes OK
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:53:05.122131) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:53:05.123538) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:53:05.123549) EVENT_LOG_v1 {"time_micros": 1759337585123545, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:53:05.123566) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1380719, prev total WAL file size 1380719, number of live WAL files 2.
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:53:05.124219) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1341KB)], [32(7471KB)]
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337585124280, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9024958, "oldest_snapshot_seqno": -1}
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3778 keys, 7090327 bytes, temperature: kUnknown
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337585194430, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7090327, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7063338, "index_size": 16447, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 92633, "raw_average_key_size": 24, "raw_value_size": 6993197, "raw_average_value_size": 1851, "num_data_blocks": 696, "num_entries": 3778, "num_filter_entries": 3778, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336399, "oldest_key_time": 0, "file_creation_time": 1759337585, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:53:05.194783) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7090327 bytes
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:53:05.196816) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.5 rd, 100.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.3 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(11.7) write-amplify(5.2) OK, records in: 4801, records dropped: 1023 output_compression: NoCompression
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:53:05.196859) EVENT_LOG_v1 {"time_micros": 1759337585196838, "job": 14, "event": "compaction_finished", "compaction_time_micros": 70247, "compaction_time_cpu_micros": 30596, "output_level": 6, "num_output_files": 1, "total_output_size": 7090327, "num_input_records": 4801, "num_output_records": 3778, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337585197570, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337585200829, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:53:05.124039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:53:05.200881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:53:05.200886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:53:05.200914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:53:05.200916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:53:05 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:53:05.200917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:53:05 compute-0 python3.9[250335]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:53:05 compute-0 sudo[250333]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:05 compute-0 sudo[250486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfeumkyyexinqphixvezajpsmsqfgezw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337585.4136677-1210-145700987773090/AnsiballZ_command.py'
Oct 01 16:53:05 compute-0 sudo[250486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:06 compute-0 python3.9[250488]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 16:53:06 compute-0 sudo[250486]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:06 compute-0 ceph-mon[74273]: pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:07 compute-0 sudo[250650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slsafalhzqcyhafvpifrfdfrlxnizjzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337587.0812697-1289-6406417461134/AnsiballZ_file.py'
Oct 01 16:53:07 compute-0 sudo[250650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:07 compute-0 podman[250613]: 2025-10-01 16:53:07.461316281 +0000 UTC m=+0.092061280 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 01 16:53:07 compute-0 python3.9[250658]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:53:07 compute-0 sudo[250650]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:08 compute-0 sudo[250810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otdhcipoxwubetpsxpaluaymcmvwbprc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337587.8628933-1289-168949479473208/AnsiballZ_file.py'
Oct 01 16:53:08 compute-0 sudo[250810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:08 compute-0 python3.9[250812]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:53:08 compute-0 sudo[250810]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:08 compute-0 ceph-mon[74273]: pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:08 compute-0 sudo[250962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rulazxwpempihixeqwzvfqcdtuswiser ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337588.6390352-1289-138245061908677/AnsiballZ_file.py'
Oct 01 16:53:08 compute-0 sudo[250962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:09 compute-0 python3.9[250964]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:53:09 compute-0 sudo[250962]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:09 compute-0 podman[251064]: 2025-10-01 16:53:09.787139152 +0000 UTC m=+0.089629703 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 01 16:53:09 compute-0 sudo[251133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nayldfcpsloyhxljljwpvozqavqbtedn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337589.4793117-1311-77490099791990/AnsiballZ_file.py'
Oct 01 16:53:09 compute-0 sudo[251133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:10 compute-0 python3.9[251135]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:53:10 compute-0 sudo[251133]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:53:10 compute-0 sudo[251235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:53:10 compute-0 sudo[251235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:53:10 compute-0 sudo[251235]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:10 compute-0 sudo[251333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwpzzucxgxghhrvhtvjfvpoemkqhmjta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337590.2332194-1311-14619407166613/AnsiballZ_file.py'
Oct 01 16:53:10 compute-0 sudo[251290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:53:10 compute-0 sudo[251333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:10 compute-0 sudo[251290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:53:10 compute-0 sudo[251290]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:10 compute-0 sudo[251338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:53:10 compute-0 sudo[251338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:53:10 compute-0 sudo[251338]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:10 compute-0 sudo[251363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:53:10 compute-0 sudo[251363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:53:10 compute-0 python3.9[251336]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:53:10 compute-0 ceph-mon[74273]: pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:10 compute-0 sudo[251333]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:11 compute-0 sudo[251363]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:53:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:53:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:53:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:53:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:53:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev d43f83b1-f846-4cc7-9ec0-ea391078406d does not exist
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 69a3398c-aa14-4fa5-aa58-c5760df9f440 does not exist
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev b62f3011-879a-4445-b21f-61b8abbcde22 does not exist
Oct 01 16:53:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:53:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:53:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:53:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:53:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:53:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:53:11 compute-0 sudo[251518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:53:11 compute-0 sudo[251518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:53:11 compute-0 sudo[251518]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:11 compute-0 sudo[251567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:53:11 compute-0 sudo[251567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:53:11
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['.rgw.root', 'vms', 'images', 'backups', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'volumes']
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:53:11 compute-0 sudo[251567]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:53:11 compute-0 sudo[251622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbhvgbnfkvcoselkozeeuakcprdcesgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337591.018534-1311-186164965641872/AnsiballZ_file.py'
Oct 01 16:53:11 compute-0 sudo[251622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:11 compute-0 sudo[251616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:53:11 compute-0 sudo[251616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:53:11 compute-0 sudo[251616]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:53:11 compute-0 sudo[251646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:53:11 compute-0 sudo[251646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:53:11 compute-0 python3.9[251628]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:53:11 compute-0 sudo[251622]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:53:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:53:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:53:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:53:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:53:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:53:11 compute-0 podman[251770]: 2025-10-01 16:53:11.935979763 +0000 UTC m=+0.096001401 container create c52fc81103d39a25693893ce0db5d6e55ac91d99986928dd404b75cae87e519f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_meninsky, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 16:53:11 compute-0 podman[251770]: 2025-10-01 16:53:11.860753733 +0000 UTC m=+0.020775391 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:53:12 compute-0 sudo[251873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krwhakmosbajozxlsgnoqlcyazridlwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337591.8038619-1311-170053928349973/AnsiballZ_file.py'
Oct 01 16:53:12 compute-0 sudo[251873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:12 compute-0 systemd[1]: Started libpod-conmon-c52fc81103d39a25693893ce0db5d6e55ac91d99986928dd404b75cae87e519f.scope.
Oct 01 16:53:12 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:53:12 compute-0 python3.9[251875]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:53:12 compute-0 podman[251770]: 2025-10-01 16:53:12.319949092 +0000 UTC m=+0.479970810 container init c52fc81103d39a25693893ce0db5d6e55ac91d99986928dd404b75cae87e519f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 16:53:12 compute-0 sudo[251873]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:12 compute-0 podman[251770]: 2025-10-01 16:53:12.337834272 +0000 UTC m=+0.497855950 container start c52fc81103d39a25693893ce0db5d6e55ac91d99986928dd404b75cae87e519f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:53:12 compute-0 systemd[1]: libpod-c52fc81103d39a25693893ce0db5d6e55ac91d99986928dd404b75cae87e519f.scope: Deactivated successfully.
Oct 01 16:53:12 compute-0 great_meninsky[251878]: 167 167
Oct 01 16:53:12 compute-0 conmon[251878]: conmon c52fc81103d39a256938 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c52fc81103d39a25693893ce0db5d6e55ac91d99986928dd404b75cae87e519f.scope/container/memory.events
Oct 01 16:53:12 compute-0 podman[251770]: 2025-10-01 16:53:12.392249813 +0000 UTC m=+0.552271551 container attach c52fc81103d39a25693893ce0db5d6e55ac91d99986928dd404b75cae87e519f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_meninsky, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 16:53:12 compute-0 podman[251770]: 2025-10-01 16:53:12.394111583 +0000 UTC m=+0.554133241 container died c52fc81103d39a25693893ce0db5d6e55ac91d99986928dd404b75cae87e519f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_meninsky, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 16:53:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-768804e86f0fc4638e8a10f2ef86472e6a7256b42134c91421a01ccadf7ad903-merged.mount: Deactivated successfully.
Oct 01 16:53:12 compute-0 ceph-mon[74273]: pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:12 compute-0 sudo[252043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwatojslhntkliwkxemfdjaduwfywawz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337592.5284114-1311-230312884209704/AnsiballZ_file.py'
Oct 01 16:53:12 compute-0 sudo[252043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:12 compute-0 podman[251770]: 2025-10-01 16:53:12.956735299 +0000 UTC m=+1.116756947 container remove c52fc81103d39a25693893ce0db5d6e55ac91d99986928dd404b75cae87e519f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_meninsky, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 16:53:13 compute-0 systemd[1]: libpod-conmon-c52fc81103d39a25693893ce0db5d6e55ac91d99986928dd404b75cae87e519f.scope: Deactivated successfully.
Oct 01 16:53:13 compute-0 python3.9[252045]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:53:13 compute-0 podman[252053]: 2025-10-01 16:53:13.254719068 +0000 UTC m=+0.107698325 container create 1295ff3fb209af9717c0e3850a0c0dcaaa2cf3b87987ef43346b66defc4d250a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:53:13 compute-0 sudo[252043]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:13 compute-0 podman[252053]: 2025-10-01 16:53:13.198705748 +0000 UTC m=+0.051685065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:53:13 compute-0 systemd[1]: Started libpod-conmon-1295ff3fb209af9717c0e3850a0c0dcaaa2cf3b87987ef43346b66defc4d250a.scope.
Oct 01 16:53:13 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:53:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adb4832b47b367487c230d1f232075fd7b0f420bd2bbfd54686d2e869137b47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:53:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adb4832b47b367487c230d1f232075fd7b0f420bd2bbfd54686d2e869137b47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:53:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adb4832b47b367487c230d1f232075fd7b0f420bd2bbfd54686d2e869137b47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:53:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adb4832b47b367487c230d1f232075fd7b0f420bd2bbfd54686d2e869137b47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:53:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adb4832b47b367487c230d1f232075fd7b0f420bd2bbfd54686d2e869137b47/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:53:13 compute-0 podman[252053]: 2025-10-01 16:53:13.557593752 +0000 UTC m=+0.410573029 container init 1295ff3fb209af9717c0e3850a0c0dcaaa2cf3b87987ef43346b66defc4d250a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_johnson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:53:13 compute-0 podman[252053]: 2025-10-01 16:53:13.571616749 +0000 UTC m=+0.424596016 container start 1295ff3fb209af9717c0e3850a0c0dcaaa2cf3b87987ef43346b66defc4d250a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_johnson, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 16:53:13 compute-0 podman[252053]: 2025-10-01 16:53:13.705029486 +0000 UTC m=+0.558008753 container attach 1295ff3fb209af9717c0e3850a0c0dcaaa2cf3b87987ef43346b66defc4d250a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_johnson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 16:53:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:13 compute-0 sudo[252223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phrpuqhknhyqvtfkuahocyiqfddjhxev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337593.4898863-1311-64167053301763/AnsiballZ_file.py'
Oct 01 16:53:13 compute-0 sudo[252223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:14 compute-0 python3.9[252225]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:53:14 compute-0 sudo[252223]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:14 compute-0 sudo[252391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yojwogsppzxvvuvbuuymwhdkkkqymukz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337594.3345635-1311-178585542183919/AnsiballZ_file.py'
Oct 01 16:53:14 compute-0 sudo[252391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:14 compute-0 priceless_johnson[252092]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:53:14 compute-0 priceless_johnson[252092]: --> relative data size: 1.0
Oct 01 16:53:14 compute-0 priceless_johnson[252092]: --> All data devices are unavailable
Oct 01 16:53:14 compute-0 systemd[1]: libpod-1295ff3fb209af9717c0e3850a0c0dcaaa2cf3b87987ef43346b66defc4d250a.scope: Deactivated successfully.
Oct 01 16:53:14 compute-0 systemd[1]: libpod-1295ff3fb209af9717c0e3850a0c0dcaaa2cf3b87987ef43346b66defc4d250a.scope: Consumed 1.249s CPU time.
Oct 01 16:53:14 compute-0 conmon[252092]: conmon 1295ff3fb209af9717c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1295ff3fb209af9717c0e3850a0c0dcaaa2cf3b87987ef43346b66defc4d250a.scope/container/memory.events
Oct 01 16:53:14 compute-0 podman[252053]: 2025-10-01 16:53:14.895479326 +0000 UTC m=+1.748458583 container died 1295ff3fb209af9717c0e3850a0c0dcaaa2cf3b87987ef43346b66defc4d250a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 16:53:14 compute-0 python3.9[252394]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:53:15 compute-0 sudo[252391]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:15 compute-0 ceph-mon[74273]: pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:53:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6adb4832b47b367487c230d1f232075fd7b0f420bd2bbfd54686d2e869137b47-merged.mount: Deactivated successfully.
Oct 01 16:53:15 compute-0 podman[252053]: 2025-10-01 16:53:15.285015691 +0000 UTC m=+2.137994918 container remove 1295ff3fb209af9717c0e3850a0c0dcaaa2cf3b87987ef43346b66defc4d250a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_johnson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:53:15 compute-0 sudo[251646]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:15 compute-0 systemd[1]: libpod-conmon-1295ff3fb209af9717c0e3850a0c0dcaaa2cf3b87987ef43346b66defc4d250a.scope: Deactivated successfully.
Oct 01 16:53:15 compute-0 sudo[252512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:53:15 compute-0 sudo[252512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:53:15 compute-0 sudo[252512]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:15 compute-0 sudo[252560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:53:15 compute-0 sudo[252560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:53:15 compute-0 sudo[252560]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:15 compute-0 sudo[252612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfqwptmlausbcioduicnyhtogxfksocd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337595.1858594-1311-199473036100938/AnsiballZ_file.py'
Oct 01 16:53:15 compute-0 sudo[252612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:15 compute-0 sudo[252613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:53:15 compute-0 sudo[252613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:53:15 compute-0 sudo[252613]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:15 compute-0 sudo[252640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:53:15 compute-0 sudo[252640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:53:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:15 compute-0 python3.9[252632]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:53:15 compute-0 sudo[252612]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:16 compute-0 podman[252752]: 2025-10-01 16:53:16.070197846 +0000 UTC m=+0.066686567 container create 667627748cdf83fca72c8ef38a3f02b20f5b1a1b662bdb072c023844bd4270e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_franklin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:53:16 compute-0 systemd[1]: Started libpod-conmon-667627748cdf83fca72c8ef38a3f02b20f5b1a1b662bdb072c023844bd4270e9.scope.
Oct 01 16:53:16 compute-0 podman[252752]: 2025-10-01 16:53:16.041084372 +0000 UTC m=+0.037573103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:53:16 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:53:16 compute-0 podman[252752]: 2025-10-01 16:53:16.193528363 +0000 UTC m=+0.190017064 container init 667627748cdf83fca72c8ef38a3f02b20f5b1a1b662bdb072c023844bd4270e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_franklin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:53:16 compute-0 podman[252752]: 2025-10-01 16:53:16.207320759 +0000 UTC m=+0.203809470 container start 667627748cdf83fca72c8ef38a3f02b20f5b1a1b662bdb072c023844bd4270e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:53:16 compute-0 podman[252752]: 2025-10-01 16:53:16.212340173 +0000 UTC m=+0.208828954 container attach 667627748cdf83fca72c8ef38a3f02b20f5b1a1b662bdb072c023844bd4270e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_franklin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 01 16:53:16 compute-0 gifted_franklin[252797]: 167 167
Oct 01 16:53:16 compute-0 systemd[1]: libpod-667627748cdf83fca72c8ef38a3f02b20f5b1a1b662bdb072c023844bd4270e9.scope: Deactivated successfully.
Oct 01 16:53:16 compute-0 conmon[252797]: conmon 667627748cdf83fca72c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-667627748cdf83fca72c8ef38a3f02b20f5b1a1b662bdb072c023844bd4270e9.scope/container/memory.events
Oct 01 16:53:16 compute-0 podman[252752]: 2025-10-01 16:53:16.215628564 +0000 UTC m=+0.212117335 container died 667627748cdf83fca72c8ef38a3f02b20f5b1a1b662bdb072c023844bd4270e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 01 16:53:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-70fc91b671bd7c217f5e0f0067f5f65f48bcdceec0c7b2100dff1c27c6e7a5b8-merged.mount: Deactivated successfully.
Oct 01 16:53:16 compute-0 podman[252752]: 2025-10-01 16:53:16.274530292 +0000 UTC m=+0.271019003 container remove 667627748cdf83fca72c8ef38a3f02b20f5b1a1b662bdb072c023844bd4270e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 16:53:16 compute-0 systemd[1]: libpod-conmon-667627748cdf83fca72c8ef38a3f02b20f5b1a1b662bdb072c023844bd4270e9.scope: Deactivated successfully.
Oct 01 16:53:16 compute-0 sudo[252888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziuzhzzmvimgwrkwvdepbeeclzaerwnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337596.01177-1311-144379738900647/AnsiballZ_file.py'
Oct 01 16:53:16 compute-0 sudo[252888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:16 compute-0 podman[252896]: 2025-10-01 16:53:16.484817938 +0000 UTC m=+0.051015235 container create decec366f397f822d5fbe5e06e1c831084309a2ab6dd9894721e31ac3daffda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 16:53:16 compute-0 systemd[1]: Started libpod-conmon-decec366f397f822d5fbe5e06e1c831084309a2ab6dd9894721e31ac3daffda4.scope.
Oct 01 16:53:16 compute-0 podman[252896]: 2025-10-01 16:53:16.460932498 +0000 UTC m=+0.027129825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:53:16 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:53:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32095cfa218ee0ac0362c5fb8b4e8e508eec6a8f0bf23f9ed0e5f30fab13785b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:53:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32095cfa218ee0ac0362c5fb8b4e8e508eec6a8f0bf23f9ed0e5f30fab13785b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:53:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32095cfa218ee0ac0362c5fb8b4e8e508eec6a8f0bf23f9ed0e5f30fab13785b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:53:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32095cfa218ee0ac0362c5fb8b4e8e508eec6a8f0bf23f9ed0e5f30fab13785b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:53:16 compute-0 podman[252896]: 2025-10-01 16:53:16.581335446 +0000 UTC m=+0.147532743 container init decec366f397f822d5fbe5e06e1c831084309a2ab6dd9894721e31ac3daffda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:53:16 compute-0 podman[252896]: 2025-10-01 16:53:16.595303085 +0000 UTC m=+0.161500382 container start decec366f397f822d5fbe5e06e1c831084309a2ab6dd9894721e31ac3daffda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_johnson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 16:53:16 compute-0 podman[252896]: 2025-10-01 16:53:16.599275216 +0000 UTC m=+0.165472533 container attach decec366f397f822d5fbe5e06e1c831084309a2ab6dd9894721e31ac3daffda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_johnson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 16:53:16 compute-0 python3.9[252895]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:53:16 compute-0 sudo[252888]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:17 compute-0 ceph-mon[74273]: pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]: {
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:     "0": [
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:         {
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "devices": [
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "/dev/loop3"
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             ],
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "lv_name": "ceph_lv0",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "lv_size": "21470642176",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "name": "ceph_lv0",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "tags": {
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.cluster_name": "ceph",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.crush_device_class": "",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.encrypted": "0",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.osd_id": "0",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.type": "block",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.vdo": "0"
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             },
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "type": "block",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "vg_name": "ceph_vg0"
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:         }
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:     ],
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:     "1": [
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:         {
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "devices": [
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "/dev/loop4"
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             ],
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "lv_name": "ceph_lv1",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "lv_size": "21470642176",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "name": "ceph_lv1",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "tags": {
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.cluster_name": "ceph",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.crush_device_class": "",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.encrypted": "0",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.osd_id": "1",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.type": "block",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.vdo": "0"
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             },
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "type": "block",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "vg_name": "ceph_vg1"
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:         }
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:     ],
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:     "2": [
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:         {
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "devices": [
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "/dev/loop5"
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             ],
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "lv_name": "ceph_lv2",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "lv_size": "21470642176",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "name": "ceph_lv2",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "tags": {
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.cluster_name": "ceph",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.crush_device_class": "",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.encrypted": "0",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.osd_id": "2",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.type": "block",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:                 "ceph.vdo": "0"
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             },
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "type": "block",
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:             "vg_name": "ceph_vg2"
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:         }
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]:     ]
Oct 01 16:53:17 compute-0 quizzical_johnson[252912]: }
Oct 01 16:53:17 compute-0 systemd[1]: libpod-decec366f397f822d5fbe5e06e1c831084309a2ab6dd9894721e31ac3daffda4.scope: Deactivated successfully.
Oct 01 16:53:17 compute-0 podman[252945]: 2025-10-01 16:53:17.531561463 +0000 UTC m=+0.042551568 container died decec366f397f822d5fbe5e06e1c831084309a2ab6dd9894721e31ac3daffda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_johnson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 16:53:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-32095cfa218ee0ac0362c5fb8b4e8e508eec6a8f0bf23f9ed0e5f30fab13785b-merged.mount: Deactivated successfully.
Oct 01 16:53:17 compute-0 podman[252945]: 2025-10-01 16:53:17.612260096 +0000 UTC m=+0.123250171 container remove decec366f397f822d5fbe5e06e1c831084309a2ab6dd9894721e31ac3daffda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:53:17 compute-0 systemd[1]: libpod-conmon-decec366f397f822d5fbe5e06e1c831084309a2ab6dd9894721e31ac3daffda4.scope: Deactivated successfully.
Oct 01 16:53:17 compute-0 sudo[252640]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:17 compute-0 sudo[252961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:53:17 compute-0 sudo[252961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:53:17 compute-0 sudo[252961]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:17 compute-0 sudo[252986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:53:17 compute-0 sudo[252986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:53:17 compute-0 sudo[252986]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:17 compute-0 sudo[253011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:53:17 compute-0 sudo[253011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:53:17 compute-0 sudo[253011]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:18 compute-0 sudo[253036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:53:18 compute-0 sudo[253036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:53:18 compute-0 ceph-mon[74273]: pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:18 compute-0 podman[253101]: 2025-10-01 16:53:18.579353818 +0000 UTC m=+0.070604600 container create 2927a0e00e7e6f12953e5895d01ca5b8021080171fbb052eb1d26faaf9f902cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hugle, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:53:18 compute-0 systemd[1]: Started libpod-conmon-2927a0e00e7e6f12953e5895d01ca5b8021080171fbb052eb1d26faaf9f902cf.scope.
Oct 01 16:53:18 compute-0 podman[253101]: 2025-10-01 16:53:18.550196596 +0000 UTC m=+0.041447338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:53:18 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:53:18 compute-0 podman[253101]: 2025-10-01 16:53:18.673023095 +0000 UTC m=+0.164273817 container init 2927a0e00e7e6f12953e5895d01ca5b8021080171fbb052eb1d26faaf9f902cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 16:53:18 compute-0 podman[253101]: 2025-10-01 16:53:18.683608727 +0000 UTC m=+0.174859409 container start 2927a0e00e7e6f12953e5895d01ca5b8021080171fbb052eb1d26faaf9f902cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hugle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:53:18 compute-0 podman[253101]: 2025-10-01 16:53:18.688060755 +0000 UTC m=+0.179311447 container attach 2927a0e00e7e6f12953e5895d01ca5b8021080171fbb052eb1d26faaf9f902cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 16:53:18 compute-0 infallible_hugle[253117]: 167 167
Oct 01 16:53:18 compute-0 systemd[1]: libpod-2927a0e00e7e6f12953e5895d01ca5b8021080171fbb052eb1d26faaf9f902cf.scope: Deactivated successfully.
Oct 01 16:53:18 compute-0 podman[253101]: 2025-10-01 16:53:18.692565962 +0000 UTC m=+0.183816644 container died 2927a0e00e7e6f12953e5895d01ca5b8021080171fbb052eb1d26faaf9f902cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 16:53:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-47b83786d1bcc035c093daa67b1808c7930777d41cae1420639d81b63ee49014-merged.mount: Deactivated successfully.
Oct 01 16:53:18 compute-0 podman[253101]: 2025-10-01 16:53:18.743630494 +0000 UTC m=+0.234881146 container remove 2927a0e00e7e6f12953e5895d01ca5b8021080171fbb052eb1d26faaf9f902cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hugle, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:53:18 compute-0 systemd[1]: libpod-conmon-2927a0e00e7e6f12953e5895d01ca5b8021080171fbb052eb1d26faaf9f902cf.scope: Deactivated successfully.
Oct 01 16:53:19 compute-0 podman[253143]: 2025-10-01 16:53:19.014504572 +0000 UTC m=+0.066637939 container create 6c7ef621179a779bbd1de91a92d9abda9c7685113959ad95d94e0c39a44ba2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 16:53:19 compute-0 systemd[1]: Started libpod-conmon-6c7ef621179a779bbd1de91a92d9abda9c7685113959ad95d94e0c39a44ba2d7.scope.
Oct 01 16:53:19 compute-0 podman[253143]: 2025-10-01 16:53:18.9873418 +0000 UTC m=+0.039475237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:53:19 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:53:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4c6801d99714b49ae431118de86863b5de198a02748e87637ff7d95bb118ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:53:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4c6801d99714b49ae431118de86863b5de198a02748e87637ff7d95bb118ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:53:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4c6801d99714b49ae431118de86863b5de198a02748e87637ff7d95bb118ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:53:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4c6801d99714b49ae431118de86863b5de198a02748e87637ff7d95bb118ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:53:19 compute-0 podman[253143]: 2025-10-01 16:53:19.139256944 +0000 UTC m=+0.191390371 container init 6c7ef621179a779bbd1de91a92d9abda9c7685113959ad95d94e0c39a44ba2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galois, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:53:19 compute-0 podman[253143]: 2025-10-01 16:53:19.151880804 +0000 UTC m=+0.204014181 container start 6c7ef621179a779bbd1de91a92d9abda9c7685113959ad95d94e0c39a44ba2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 16:53:19 compute-0 podman[253143]: 2025-10-01 16:53:19.156129912 +0000 UTC m=+0.208263259 container attach 6c7ef621179a779bbd1de91a92d9abda9c7685113959ad95d94e0c39a44ba2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galois, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 16:53:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:53:19.954 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:53:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:53:19.956 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:53:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:53:19.956 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:53:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:53:20 compute-0 focused_galois[253159]: {
Oct 01 16:53:20 compute-0 focused_galois[253159]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:53:20 compute-0 focused_galois[253159]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:53:20 compute-0 focused_galois[253159]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:53:20 compute-0 focused_galois[253159]:         "osd_id": 2,
Oct 01 16:53:20 compute-0 focused_galois[253159]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:53:20 compute-0 focused_galois[253159]:         "type": "bluestore"
Oct 01 16:53:20 compute-0 focused_galois[253159]:     },
Oct 01 16:53:20 compute-0 focused_galois[253159]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:53:20 compute-0 focused_galois[253159]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:53:20 compute-0 focused_galois[253159]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:53:20 compute-0 focused_galois[253159]:         "osd_id": 0,
Oct 01 16:53:20 compute-0 focused_galois[253159]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:53:20 compute-0 focused_galois[253159]:         "type": "bluestore"
Oct 01 16:53:20 compute-0 focused_galois[253159]:     },
Oct 01 16:53:20 compute-0 focused_galois[253159]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:53:20 compute-0 focused_galois[253159]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:53:20 compute-0 focused_galois[253159]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:53:20 compute-0 focused_galois[253159]:         "osd_id": 1,
Oct 01 16:53:20 compute-0 focused_galois[253159]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:53:20 compute-0 focused_galois[253159]:         "type": "bluestore"
Oct 01 16:53:20 compute-0 focused_galois[253159]:     }
Oct 01 16:53:20 compute-0 focused_galois[253159]: }
Oct 01 16:53:20 compute-0 systemd[1]: libpod-6c7ef621179a779bbd1de91a92d9abda9c7685113959ad95d94e0c39a44ba2d7.scope: Deactivated successfully.
Oct 01 16:53:20 compute-0 podman[253143]: 2025-10-01 16:53:20.228563405 +0000 UTC m=+1.280696772 container died 6c7ef621179a779bbd1de91a92d9abda9c7685113959ad95d94e0c39a44ba2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galois, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:53:20 compute-0 systemd[1]: libpod-6c7ef621179a779bbd1de91a92d9abda9c7685113959ad95d94e0c39a44ba2d7.scope: Consumed 1.086s CPU time.
Oct 01 16:53:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f4c6801d99714b49ae431118de86863b5de198a02748e87637ff7d95bb118ee-merged.mount: Deactivated successfully.
Oct 01 16:53:20 compute-0 podman[253143]: 2025-10-01 16:53:20.293162745 +0000 UTC m=+1.345296092 container remove 6c7ef621179a779bbd1de91a92d9abda9c7685113959ad95d94e0c39a44ba2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galois, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 16:53:20 compute-0 systemd[1]: libpod-conmon-6c7ef621179a779bbd1de91a92d9abda9c7685113959ad95d94e0c39a44ba2d7.scope: Deactivated successfully.
Oct 01 16:53:20 compute-0 sudo[253036]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:53:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:53:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:53:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:53:20 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev a3bab805-20c1-4220-8380-d1406ccf8b81 does not exist
Oct 01 16:53:20 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 59e63d03-ddc0-4bac-9cf5-9d201c379480 does not exist
Oct 01 16:53:20 compute-0 sudo[253204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:53:20 compute-0 sudo[253204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:53:20 compute-0 sudo[253204]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:20 compute-0 sudo[253229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:53:20 compute-0 sudo[253229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:53:20 compute-0 sudo[253229]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:20 compute-0 podman[253253]: 2025-10-01 16:53:20.749453774 +0000 UTC m=+0.183080297 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 01 16:53:20 compute-0 ceph-mon[74273]: pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:53:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:53:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:21 compute-0 sudo[253403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztcboclzbsikqdfqijeljainztoxvmdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337601.407314-1514-85530577729882/AnsiballZ_getent.py'
Oct 01 16:53:21 compute-0 sudo[253403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:22 compute-0 python3.9[253405]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 01 16:53:22 compute-0 sudo[253403]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:22 compute-0 ceph-mon[74273]: pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:22 compute-0 sudo[253556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfjbgffrazztxiwyzexoxalxvoxyrxga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337602.4039714-1522-57234331303227/AnsiballZ_group.py'
Oct 01 16:53:22 compute-0 sudo[253556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:23 compute-0 python3.9[253558]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 01 16:53:23 compute-0 groupadd[253559]: group added to /etc/group: name=nova, GID=42436
Oct 01 16:53:23 compute-0 groupadd[253559]: group added to /etc/gshadow: name=nova
Oct 01 16:53:23 compute-0 groupadd[253559]: new group: name=nova, GID=42436
Oct 01 16:53:23 compute-0 sudo[253556]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:23 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 16:53:23 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 3305 writes, 14K keys, 3305 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3305 writes, 3305 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1280 writes, 5811 keys, 1280 commit groups, 1.0 writes per commit group, ingest: 8.47 MB, 0.01 MB/s
                                           Interval WAL: 1280 writes, 1280 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     81.3      0.19              0.06         7    0.027       0      0       0.0       0.0
                                             L6      1/0    6.76 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    164.0    134.9      0.31              0.17         6    0.051     24K   3198       0.0       0.0
                                            Sum      1/0    6.76 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.6    100.7    114.2      0.50              0.23        13    0.038     24K   3198       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9    120.8    121.1      0.28              0.16         8    0.036     17K   2464       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    164.0    134.9      0.31              0.17         6    0.051     24K   3198       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    137.7      0.11              0.06         6    0.019       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.08              0.00         1    0.079       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.015, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.5 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5647d11d91f0#2 capacity: 308.00 MB usage: 1.57 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(97,1.35 MB,0.43759%) FilterBlock(14,74.67 KB,0.0236759%) IndexBlock(14,148.80 KB,0.0471784%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 01 16:53:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:23 compute-0 sudo[253714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdkkhyirkiksvbseyxetwzblmvjcmewv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337603.4152508-1530-28754701749581/AnsiballZ_user.py'
Oct 01 16:53:23 compute-0 sudo[253714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:24 compute-0 python3.9[253716]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 01 16:53:24 compute-0 useradd[253719]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Oct 01 16:53:24 compute-0 podman[253718]: 2025-10-01 16:53:24.323485921 +0000 UTC m=+0.072513044 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent)
Oct 01 16:53:24 compute-0 useradd[253719]: add 'nova' to group 'libvirt'
Oct 01 16:53:24 compute-0 useradd[253719]: add 'nova' to shadow group 'libvirt'
Oct 01 16:53:24 compute-0 sudo[253714]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:24 compute-0 ceph-mon[74273]: pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:53:25 compute-0 sshd-session[253768]: Accepted publickey for zuul from 192.168.122.30 port 33724 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 16:53:25 compute-0 systemd-logind[788]: New session 52 of user zuul.
Oct 01 16:53:25 compute-0 systemd[1]: Started Session 52 of User zuul.
Oct 01 16:53:25 compute-0 sshd-session[253768]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 16:53:25 compute-0 sshd-session[253771]: Received disconnect from 192.168.122.30 port 33724:11: disconnected by user
Oct 01 16:53:25 compute-0 sshd-session[253771]: Disconnected from user zuul 192.168.122.30 port 33724
Oct 01 16:53:25 compute-0 sshd-session[253768]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:53:25 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Oct 01 16:53:25 compute-0 systemd-logind[788]: Session 52 logged out. Waiting for processes to exit.
Oct 01 16:53:25 compute-0 systemd-logind[788]: Removed session 52.
Oct 01 16:53:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:26 compute-0 python3.9[253921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:53:26 compute-0 python3.9[254042]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759337605.5717182-1555-68581922691768/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:53:26 compute-0 ceph-mon[74273]: pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:27 compute-0 python3.9[254192]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:53:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:27 compute-0 python3.9[254268]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:53:28 compute-0 python3.9[254418]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:53:28 compute-0 ceph-mon[74273]: pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:29 compute-0 python3.9[254539]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759337608.1317704-1555-96077618567232/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:53:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:29 compute-0 python3.9[254689]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:53:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:53:30 compute-0 python3.9[254810]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759337609.4142315-1555-248303524962494/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:53:30 compute-0 ceph-mon[74273]: pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:31 compute-0 python3.9[254960]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:53:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:31 compute-0 python3.9[255081]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759337610.7643008-1555-42380332928737/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:53:32 compute-0 sudo[255231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljcqxkomwcfuupobjrowjuigclufdcqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337612.0466247-1624-48369711497687/AnsiballZ_file.py'
Oct 01 16:53:32 compute-0 sudo[255231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:32 compute-0 python3.9[255233]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:53:32 compute-0 sudo[255231]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:32 compute-0 ceph-mon[74273]: pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:33 compute-0 sudo[255383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbzebysxilkshcmmpwoownziifflzufr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337612.8044457-1632-223584365219199/AnsiballZ_copy.py'
Oct 01 16:53:33 compute-0 sudo[255383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:33 compute-0 python3.9[255385]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:53:33 compute-0 sudo[255383]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:33 compute-0 sudo[255535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkzpgsnaescqxszibtenyckrwsjvfqlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337613.5973208-1640-59043272036520/AnsiballZ_stat.py'
Oct 01 16:53:33 compute-0 sudo[255535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:34 compute-0 python3.9[255537]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:53:34 compute-0 sudo[255535]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:34 compute-0 sudo[255687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vofhaiicbhoonzbuyejgglgycmvnbyiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337614.340524-1648-110575245276911/AnsiballZ_stat.py'
Oct 01 16:53:34 compute-0 sudo[255687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:34 compute-0 python3.9[255689]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:53:34 compute-0 ceph-mon[74273]: pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:34 compute-0 sudo[255687]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:53:35 compute-0 sudo[255810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkeiklvfjbdxialmgsiuybuiltrionet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337614.340524-1648-110575245276911/AnsiballZ_copy.py'
Oct 01 16:53:35 compute-0 sudo[255810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:35 compute-0 python3.9[255812]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1759337614.340524-1648-110575245276911/.source _original_basename=.y2ikhn5u follow=False checksum=4dc9e073a7aa1769b77c197897a581942c52f1f7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 01 16:53:35 compute-0 sudo[255810]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:36 compute-0 python3.9[255964]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:53:36 compute-0 ceph-mon[74273]: pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:37 compute-0 python3.9[256116]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:53:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:37 compute-0 podman[256167]: 2025-10-01 16:53:37.750724225 +0000 UTC m=+0.060313135 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 01 16:53:38 compute-0 python3.9[256257]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759337616.8909726-1674-75326829913118/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=f022386746472553146d29f689b545df70fa8a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:53:38 compute-0 python3.9[256407]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 16:53:38 compute-0 ceph-mon[74273]: pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:39 compute-0 python3.9[256528]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759337618.281937-1689-72609062409717/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 16:53:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:53:40 compute-0 sudo[256691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-refesrbdufjdpocahinoopbqdewrxnam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337619.8361964-1706-148305723202947/AnsiballZ_container_config_data.py'
Oct 01 16:53:40 compute-0 sudo[256691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:40 compute-0 podman[256652]: 2025-10-01 16:53:40.222057145 +0000 UTC m=+0.094476131 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 16:53:40 compute-0 python3.9[256700]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 01 16:53:40 compute-0 sudo[256691]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:40 compute-0 ceph-mon[74273]: pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:41 compute-0 sudo[256851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqtsnkcmtswapdeadkuczduvsnncnnnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337620.716674-1715-50320584785873/AnsiballZ_container_config_hash.py'
Oct 01 16:53:41 compute-0 sudo[256851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:41 compute-0 python3.9[256853]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 01 16:53:41 compute-0 sudo[256851]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:53:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:53:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:53:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:53:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:53:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:53:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:42 compute-0 sudo[257003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqnxsvdluhfhrayrxovtgfkowexyvdwr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759337621.658939-1725-107278486255276/AnsiballZ_edpm_container_manage.py'
Oct 01 16:53:42 compute-0 sudo[257003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:42 compute-0 python3[257005]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 01 16:53:43 compute-0 ceph-mon[74273]: pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:45 compute-0 ceph-mon[74273]: pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:53:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:46 compute-0 ceph-mon[74273]: pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:49 compute-0 ceph-mon[74273]: pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:53:50 compute-0 ceph-mon[74273]: pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:51 compute-0 podman[257077]: 2025-10-01 16:53:51.85810232 +0000 UTC m=+0.175136876 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 01 16:53:51 compute-0 podman[257017]: 2025-10-01 16:53:51.872032851 +0000 UTC m=+9.460707307 image pull cb7a9bebda1404fc92f1415580e7da04b5fcfd160582e38b9b99703a41ed1519 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 01 16:53:52 compute-0 podman[257125]: 2025-10-01 16:53:52.082871512 +0000 UTC m=+0.080349560 container create 956ee4e2bf9ef3f3837694c8da1572763df3fb057205dbff195a2e5978566388 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 16:53:52 compute-0 podman[257125]: 2025-10-01 16:53:52.04374752 +0000 UTC m=+0.041225618 image pull cb7a9bebda1404fc92f1415580e7da04b5fcfd160582e38b9b99703a41ed1519 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 01 16:53:52 compute-0 python3[257005]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 01 16:53:52 compute-0 sudo[257003]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:52 compute-0 sudo[257313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edqixcexioehzvahkfoptcujiwovqvec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337632.5056226-1733-191773889705768/AnsiballZ_stat.py'
Oct 01 16:53:52 compute-0 sudo[257313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:52 compute-0 ceph-mon[74273]: pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:53 compute-0 python3.9[257315]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:53:53 compute-0 sudo[257313]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:53 compute-0 sudo[257467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwtygninvbmexstbooubjslugfpyjezf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337633.4683888-1745-38453529629327/AnsiballZ_container_config_data.py'
Oct 01 16:53:53 compute-0 sudo[257467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:54 compute-0 python3.9[257469]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 01 16:53:54 compute-0 sudo[257467]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:54 compute-0 sudo[257633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruckedyzfnuzbkwdaztwhmydacqjyeuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337634.3422775-1754-245753873322224/AnsiballZ_container_config_hash.py'
Oct 01 16:53:54 compute-0 sudo[257633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:54 compute-0 podman[257593]: 2025-10-01 16:53:54.759437228 +0000 UTC m=+0.074046945 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 01 16:53:54 compute-0 ceph-mon[74273]: pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:54 compute-0 python3.9[257641]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 01 16:53:55 compute-0 sudo[257633]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:53:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:55 compute-0 sudo[257791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsrmvdgbhkxybhcbublaszzhpqhuluve ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759337635.3573194-1764-87348217699657/AnsiballZ_edpm_container_manage.py'
Oct 01 16:53:55 compute-0 sudo[257791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:56 compute-0 python3[257793]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 01 16:53:56 compute-0 podman[257832]: 2025-10-01 16:53:56.269273873 +0000 UTC m=+0.041792852 image pull cb7a9bebda1404fc92f1415580e7da04b5fcfd160582e38b9b99703a41ed1519 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 01 16:53:56 compute-0 podman[257832]: 2025-10-01 16:53:56.460529789 +0000 UTC m=+0.233048768 container create a1220a3038f0ce0173e25709e534e5aa4813dad42f5a4559d3954644eeca6907 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:53:56 compute-0 python3[257793]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Oct 01 16:53:56 compute-0 sudo[257791]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:56 compute-0 ceph-mon[74273]: pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:57 compute-0 sudo[258020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gngjeuejqpowrqbffxjfvedivxrqvdjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337636.8904746-1772-94421611885318/AnsiballZ_stat.py'
Oct 01 16:53:57 compute-0 sudo[258020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:57 compute-0 python3.9[258022]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:53:57 compute-0 sudo[258020]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:58 compute-0 sudo[258174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xilzocwwrkmscfxhlbsccxqtdvnwmqgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337637.8746922-1781-267302282503850/AnsiballZ_file.py'
Oct 01 16:53:58 compute-0 sudo[258174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:58 compute-0 python3.9[258176]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:53:58 compute-0 sudo[258174]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:58 compute-0 ceph-mon[74273]: pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:59 compute-0 sudo[258325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxxghtwloeqzpzkazcuhbagxrrfvtzrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337638.6052275-1781-7197340952514/AnsiballZ_copy.py'
Oct 01 16:53:59 compute-0 sudo[258325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:53:59 compute-0 python3.9[258327]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759337638.6052275-1781-7197340952514/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 16:53:59 compute-0 sudo[258325]: pam_unix(sudo:session): session closed for user root
Oct 01 16:53:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:53:59 compute-0 sudo[258401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuwxknzkrrdxcqlgbzbwwnbltpvcqeeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337638.6052275-1781-7197340952514/AnsiballZ_systemd.py'
Oct 01 16:53:59 compute-0 sudo[258401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:54:00 compute-0 python3.9[258403]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 01 16:54:00 compute-0 systemd[1]: Reloading.
Oct 01 16:54:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:54:00 compute-0 systemd-rc-local-generator[258428]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:54:00 compute-0 systemd-sysv-generator[258431]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:54:00 compute-0 sudo[258401]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:00 compute-0 sudo[258512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mezukopauzwuinsptcrzgqisjcincejv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337638.6052275-1781-7197340952514/AnsiballZ_systemd.py'
Oct 01 16:54:00 compute-0 sudo[258512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:54:00 compute-0 ceph-mon[74273]: pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:01 compute-0 python3.9[258514]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 16:54:01 compute-0 systemd[1]: Reloading.
Oct 01 16:54:01 compute-0 systemd-sysv-generator[258544]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 16:54:01 compute-0 systemd-rc-local-generator[258540]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 16:54:01 compute-0 systemd[1]: Starting nova_compute container...
Oct 01 16:54:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:01 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:54:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2192c9349a1be6088eed524ba4606745a4d4425a6e77595b4f59eeb09a80e48/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2192c9349a1be6088eed524ba4606745a4d4425a6e77595b4f59eeb09a80e48/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2192c9349a1be6088eed524ba4606745a4d4425a6e77595b4f59eeb09a80e48/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2192c9349a1be6088eed524ba4606745a4d4425a6e77595b4f59eeb09a80e48/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2192c9349a1be6088eed524ba4606745a4d4425a6e77595b4f59eeb09a80e48/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:01 compute-0 podman[258553]: 2025-10-01 16:54:01.778484071 +0000 UTC m=+0.136145488 container init a1220a3038f0ce0173e25709e534e5aa4813dad42f5a4559d3954644eeca6907 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 01 16:54:01 compute-0 podman[258553]: 2025-10-01 16:54:01.785747123 +0000 UTC m=+0.143408510 container start a1220a3038f0ce0173e25709e534e5aa4813dad42f5a4559d3954644eeca6907 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 01 16:54:01 compute-0 podman[258553]: nova_compute
Oct 01 16:54:01 compute-0 nova_compute[258569]: + sudo -E kolla_set_configs
Oct 01 16:54:01 compute-0 systemd[1]: Started nova_compute container.
Oct 01 16:54:01 compute-0 sudo[258512]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Validating config file
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Copying service configuration files
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Deleting /etc/ceph
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Creating directory /etc/ceph
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Setting permission for /etc/ceph
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Writing out command to execute
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 01 16:54:01 compute-0 nova_compute[258569]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 01 16:54:01 compute-0 nova_compute[258569]: ++ cat /run_command
Oct 01 16:54:01 compute-0 nova_compute[258569]: + CMD=nova-compute
Oct 01 16:54:01 compute-0 nova_compute[258569]: + ARGS=
Oct 01 16:54:01 compute-0 nova_compute[258569]: + sudo kolla_copy_cacerts
Oct 01 16:54:01 compute-0 nova_compute[258569]: + [[ ! -n '' ]]
Oct 01 16:54:01 compute-0 nova_compute[258569]: + . kolla_extend_start
Oct 01 16:54:01 compute-0 nova_compute[258569]: Running command: 'nova-compute'
Oct 01 16:54:01 compute-0 nova_compute[258569]: + echo 'Running command: '\''nova-compute'\'''
Oct 01 16:54:01 compute-0 nova_compute[258569]: + umask 0022
Oct 01 16:54:01 compute-0 nova_compute[258569]: + exec nova-compute
Oct 01 16:54:02 compute-0 ceph-mon[74273]: pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:02 compute-0 python3.9[258730]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:54:03 compute-0 python3.9[258881]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:54:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:04 compute-0 python3.9[259031]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 16:54:04 compute-0 nova_compute[258569]: 2025-10-01 16:54:04.572 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 01 16:54:04 compute-0 nova_compute[258569]: 2025-10-01 16:54:04.573 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 01 16:54:04 compute-0 nova_compute[258569]: 2025-10-01 16:54:04.573 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 01 16:54:04 compute-0 nova_compute[258569]: 2025-10-01 16:54:04.573 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 01 16:54:04 compute-0 nova_compute[258569]: 2025-10-01 16:54:04.712 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 16:54:04 compute-0 nova_compute[258569]: 2025-10-01 16:54:04.747 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 16:54:04 compute-0 ceph-mon[74273]: pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:54:05 compute-0 sudo[259185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzvcwrrzpeusvrswgjtgrdqauaxiyrxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337644.7555065-1841-150332437797864/AnsiballZ_podman_container.py'
Oct 01 16:54:05 compute-0 sudo[259185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:54:05 compute-0 python3.9[259187]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.479 2 INFO nova.virt.driver [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 01 16:54:05 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 16:54:05 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 16:54:05 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 16:54:05 compute-0 sudo[259185]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.679 2 INFO nova.compute.provider_config [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.694 2 DEBUG oslo_concurrency.lockutils [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.695 2 DEBUG oslo_concurrency.lockutils [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.695 2 DEBUG oslo_concurrency.lockutils [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.696 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.696 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.696 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.697 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.697 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.697 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.697 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.697 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.698 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.698 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.698 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.698 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.698 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.699 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.699 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.699 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.699 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.700 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.700 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.700 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.700 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.700 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.701 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.701 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.701 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.701 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.701 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.702 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.702 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.702 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.702 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.702 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.703 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.703 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.703 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.703 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.703 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.704 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.704 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.704 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.705 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.705 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.705 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.705 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.705 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.706 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.706 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.706 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.706 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.706 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.707 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.707 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.707 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.707 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.707 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.708 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.708 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.708 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.708 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.708 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.709 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.709 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.709 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.709 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.709 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.710 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.710 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.710 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.710 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.710 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.711 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.711 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.711 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.711 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.711 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.712 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.712 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.712 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.712 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.712 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.713 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.713 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.713 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.713 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.714 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.714 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.714 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.714 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.714 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.715 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.715 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.715 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.715 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.715 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.716 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.716 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.716 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.716 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.717 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.717 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.717 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.717 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.717 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.718 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.718 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.718 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.718 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.719 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.719 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.719 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.719 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.720 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.720 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.720 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.720 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.721 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.721 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.721 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.722 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.722 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.722 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.723 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.723 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.723 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.723 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.724 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.724 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.724 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.725 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.725 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.725 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.725 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.725 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.726 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.726 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.726 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.726 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.727 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.727 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.727 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.728 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.728 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.728 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.728 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.729 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.729 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.729 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.729 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.730 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.730 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.730 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.730 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.730 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.730 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.730 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.731 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.731 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.731 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.731 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.731 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.731 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.731 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.732 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.732 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.732 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.732 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.732 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.732 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.733 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.733 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.733 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.733 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.733 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.733 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.733 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.734 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.734 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.734 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.734 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.734 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.734 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.734 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.735 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.735 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.735 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.735 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.735 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.735 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.736 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.736 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.736 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.736 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.736 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.736 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.737 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.737 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.737 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.737 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.737 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.737 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.737 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.738 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.738 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.738 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.738 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.738 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.738 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.738 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.739 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.739 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.739 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.739 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.739 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.739 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.740 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.740 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.740 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.740 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.740 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.740 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.740 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.741 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.741 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.741 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.741 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.741 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.741 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.741 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.742 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.742 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.742 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.742 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.742 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.742 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.742 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.743 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.743 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.743 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.743 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.743 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.743 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.744 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.744 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.744 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.744 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.744 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.744 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.745 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.745 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.745 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.745 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.745 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.746 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.746 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.746 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.746 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.746 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.746 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.746 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.747 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.747 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.747 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.747 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.747 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.747 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.747 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.748 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.748 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.748 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.748 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.748 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.748 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.749 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.749 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.749 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.749 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.749 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.749 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.749 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.750 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.750 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.750 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.750 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.750 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.750 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.750 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.751 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.751 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.751 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.751 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.751 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.751 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.751 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.751 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.752 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.752 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.752 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.752 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.752 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.752 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.752 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.753 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.753 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.753 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.753 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.753 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.753 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.753 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.754 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.754 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.754 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.754 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.754 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.754 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.754 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.755 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.755 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.755 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.755 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.755 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.755 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.755 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.756 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.756 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.756 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.756 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.756 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.756 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.756 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.756 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.757 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.757 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.757 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.757 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.757 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.757 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.757 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.758 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.758 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.758 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.758 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.758 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.758 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.758 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.759 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.759 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.759 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.759 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.759 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.759 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.759 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.759 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.760 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.760 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.760 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.760 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.760 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.761 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.761 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.761 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.761 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.761 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.761 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.762 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.762 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.762 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.762 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.762 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.762 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.762 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.762 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.763 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.763 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.763 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.763 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.763 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.763 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.763 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.763 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.764 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.764 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.764 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.764 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.764 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.764 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.764 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.764 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.765 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.765 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.765 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.765 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.765 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.765 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.765 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.766 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.766 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.766 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.766 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.766 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.766 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.766 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.767 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.767 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.767 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.767 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.767 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.767 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.767 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.768 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.768 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.768 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.768 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.768 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.768 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.768 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.768 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.769 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.769 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.769 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.769 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.769 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.769 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.769 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.770 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.770 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.770 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.770 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.770 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.770 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.770 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.770 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.771 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.771 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.771 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.771 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.771 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.771 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.771 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.772 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.772 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.772 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.772 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.772 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.772 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.772 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.772 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.773 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.773 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.773 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.773 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.773 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.773 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.773 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.774 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.774 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.774 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.774 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.774 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.774 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.774 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.774 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.775 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.775 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.775 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.775 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.775 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.775 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.775 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.776 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.776 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.776 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.776 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.776 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.776 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.776 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.776 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.777 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.777 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.777 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.777 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.777 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.777 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.777 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.778 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.778 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.778 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.778 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.778 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.778 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.778 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.778 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.779 2 WARNING oslo_config.cfg [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 01 16:54:05 compute-0 nova_compute[258569]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 01 16:54:05 compute-0 nova_compute[258569]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 01 16:54:05 compute-0 nova_compute[258569]: and ``live_migration_inbound_addr`` respectively.
Oct 01 16:54:05 compute-0 nova_compute[258569]: ).  Its value may be silently ignored in the future.
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.779 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
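[editor's note: the deprecation warning above says `live_migration_uri` is superseded by `live_migration_scheme` and `live_migration_inbound_addr`. A minimal nova.conf sketch of the suggested replacement follows; the address value is an illustrative assumption, not taken from this log.]

```ini
[libvirt]
# Deprecated form still in effect on this host:
#   live_migration_uri = qemu+tls://%s/system
# Preferred equivalent: set the scheme and target address separately.
live_migration_scheme = tls
# Hypothetical migration address for this compute node:
live_migration_inbound_addr = compute-0.internal.example
```

[With `live_migration_scheme = tls`, Nova builds a `qemu+tls://.../system` connection URI, matching the TLS-based value logged above.]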
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.779 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.779 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.779 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.779 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.780 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.780 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.780 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.780 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.780 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.780 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.780 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.781 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.781 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.781 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.781 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.781 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.781 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.782 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.rbd_secret_uuid        = f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.782 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.782 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.782 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.782 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.782 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.782 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.782 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.783 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.783 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.783 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.783 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.783 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.783 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.783 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.784 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.784 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.784 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.784 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.784 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.784 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.784 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.785 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.785 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.785 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.785 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.785 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.785 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.785 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.786 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.786 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.786 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.786 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.786 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.786 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.786 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.786 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.787 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.787 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.787 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.787 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.787 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.787 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.787 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.788 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.788 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.788 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.788 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.788 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.788 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.788 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.789 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.789 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.789 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.789 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.789 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.789 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.789 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.789 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.790 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.790 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.790 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.790 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.790 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.790 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.790 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.791 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.791 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.791 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.791 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.791 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.791 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.791 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.792 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.792 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.792 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.792 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.792 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.792 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.792 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.793 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.793 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.793 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.793 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.793 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.793 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.793 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.794 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.794 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.794 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.794 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.794 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.794 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.794 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.794 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.795 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.795 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.795 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.795 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.795 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.795 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.795 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.796 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.796 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.796 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.796 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.796 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.796 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.796 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.797 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.797 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.797 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.797 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.797 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.797 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.797 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.798 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.798 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.798 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.798 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.798 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.798 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.798 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.799 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.799 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.799 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.799 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.799 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.799 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.799 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.800 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.800 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.800 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.800 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.800 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.800 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.800 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.801 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.801 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.801 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.801 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.801 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.801 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.801 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.802 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.802 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.802 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.802 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.802 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.802 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.802 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.803 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.803 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.803 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.803 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.803 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.803 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.803 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.804 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.804 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.804 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.804 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.804 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.804 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.805 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.805 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.805 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.805 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.805 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.805 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.805 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.806 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.806 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.806 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.806 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.806 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.806 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.806 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.806 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.807 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.807 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.807 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.807 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.807 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.807 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.807 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.808 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.808 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.808 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.808 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.808 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.808 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.808 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.809 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.809 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.809 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.809 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.809 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.809 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.809 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.809 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.810 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.810 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.810 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.810 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.810 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.810 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.810 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.811 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.811 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.811 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.811 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.811 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.811 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.811 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.811 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.812 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.812 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.812 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.812 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.812 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.812 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.812 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.812 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.813 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.813 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.813 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.813 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.813 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.813 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.814 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.814 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.814 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.814 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.814 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.814 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.814 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.815 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.815 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.815 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.815 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.815 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.815 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.815 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.815 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.816 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.816 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.816 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.816 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.816 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.816 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.816 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.817 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.817 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.817 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.817 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.817 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.817 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.817 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.818 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.818 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.818 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.818 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.818 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.818 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.818 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.818 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.819 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.819 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.819 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.819 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.819 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.819 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.819 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.820 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.820 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.820 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.820 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.820 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.820 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.820 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.821 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.821 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.821 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.821 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.821 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.821 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.821 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.821 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.822 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.822 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.822 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.822 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.822 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.822 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.823 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.823 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.823 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.823 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.823 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.823 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.823 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.824 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.824 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.824 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.824 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.824 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.824 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.824 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.825 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.825 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.825 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.825 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.825 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.825 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.825 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.826 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.826 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.826 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.826 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.826 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.826 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.827 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.827 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.827 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.827 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.827 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.827 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.827 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.828 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.828 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.828 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.828 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.828 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.828 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.828 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.829 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.829 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.829 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.829 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.829 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.829 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.830 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.830 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.830 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.830 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.830 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.830 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.830 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.831 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.831 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.831 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.831 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.831 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.831 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.831 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.832 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.832 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.832 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.832 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.832 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.832 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.833 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.833 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.833 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.833 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.833 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.833 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.833 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.833 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.834 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.834 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.834 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.834 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.834 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.834 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.834 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.835 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.835 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.835 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.835 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.835 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.835 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.836 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.836 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.836 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.836 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.836 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.836 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.837 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.837 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.837 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.837 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.837 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.837 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.837 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.837 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.838 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.838 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.838 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.838 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.838 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.838 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.838 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.839 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.839 2 DEBUG oslo_service.service [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.840 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.853 2 DEBUG nova.virt.libvirt.host [None req-24a81374-5899-48e7-a792-410150bd5d69 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.853 2 DEBUG nova.virt.libvirt.host [None req-24a81374-5899-48e7-a792-410150bd5d69 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.854 2 DEBUG nova.virt.libvirt.host [None req-24a81374-5899-48e7-a792-410150bd5d69 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 01 16:54:05 compute-0 nova_compute[258569]: 2025-10-01 16:54:05.854 2 DEBUG nova.virt.libvirt.host [None req-24a81374-5899-48e7-a792-410150bd5d69 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 01 16:54:05 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 01 16:54:05 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 01 16:54:06 compute-0 nova_compute[258569]: 2025-10-01 16:54:06.009 2 DEBUG nova.virt.libvirt.host [None req-24a81374-5899-48e7-a792-410150bd5d69 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fba4c375430> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 01 16:54:06 compute-0 nova_compute[258569]: 2025-10-01 16:54:06.013 2 DEBUG nova.virt.libvirt.host [None req-24a81374-5899-48e7-a792-410150bd5d69 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fba4c375430> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 01 16:54:06 compute-0 nova_compute[258569]: 2025-10-01 16:54:06.014 2 INFO nova.virt.libvirt.driver [None req-24a81374-5899-48e7-a792-410150bd5d69 - - - - - -] Connection event '1' reason 'None'
Oct 01 16:54:06 compute-0 nova_compute[258569]: 2025-10-01 16:54:06.047 2 WARNING nova.virt.libvirt.driver [None req-24a81374-5899-48e7-a792-410150bd5d69 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 01 16:54:06 compute-0 nova_compute[258569]: 2025-10-01 16:54:06.048 2 DEBUG nova.virt.libvirt.volume.mount [None req-24a81374-5899-48e7-a792-410150bd5d69 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 01 16:54:06 compute-0 sudo[259405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whmbunaphqsrqfosqunhipblssdarsjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337645.7682748-1849-227033389829229/AnsiballZ_systemd.py'
Oct 01 16:54:06 compute-0 sudo[259405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:54:06 compute-0 python3.9[259407]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 16:54:06 compute-0 systemd[1]: Stopping nova_compute container...
Oct 01 16:54:06 compute-0 nova_compute[258569]: 2025-10-01 16:54:06.513 2 DEBUG oslo_concurrency.lockutils [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 01 16:54:06 compute-0 nova_compute[258569]: 2025-10-01 16:54:06.514 2 DEBUG oslo_concurrency.lockutils [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 01 16:54:06 compute-0 nova_compute[258569]: 2025-10-01 16:54:06.514 2 DEBUG oslo_concurrency.lockutils [None req-54aca261-6bbb-48a6-ac94-20f50465b32e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 01 16:54:06 compute-0 ceph-mon[74273]: pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:07 compute-0 virtqemud[259310]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 01 16:54:07 compute-0 virtqemud[259310]: hostname: compute-0
Oct 01 16:54:07 compute-0 virtqemud[259310]: End of file while reading data: Input/output error
Oct 01 16:54:07 compute-0 systemd[1]: libpod-a1220a3038f0ce0173e25709e534e5aa4813dad42f5a4559d3954644eeca6907.scope: Deactivated successfully.
Oct 01 16:54:07 compute-0 systemd[1]: libpod-a1220a3038f0ce0173e25709e534e5aa4813dad42f5a4559d3954644eeca6907.scope: Consumed 3.158s CPU time.
Oct 01 16:54:07 compute-0 conmon[258569]: conmon a1220a3038f0ce0173e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a1220a3038f0ce0173e25709e534e5aa4813dad42f5a4559d3954644eeca6907.scope/container/memory.events
Oct 01 16:54:07 compute-0 podman[259419]: 2025-10-01 16:54:07.509011489 +0000 UTC m=+1.056588621 container died a1220a3038f0ce0173e25709e534e5aa4813dad42f5a4559d3954644eeca6907 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 16:54:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a1220a3038f0ce0173e25709e534e5aa4813dad42f5a4559d3954644eeca6907-userdata-shm.mount: Deactivated successfully.
Oct 01 16:54:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2192c9349a1be6088eed524ba4606745a4d4425a6e77595b4f59eeb09a80e48-merged.mount: Deactivated successfully.
Oct 01 16:54:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:08 compute-0 ceph-mon[74273]: pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:08 compute-0 podman[259455]: 2025-10-01 16:54:08.770232908 +0000 UTC m=+0.081353554 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 01 16:54:09 compute-0 podman[259419]: 2025-10-01 16:54:09.341322899 +0000 UTC m=+2.888900031 container cleanup a1220a3038f0ce0173e25709e534e5aa4813dad42f5a4559d3954644eeca6907 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Oct 01 16:54:09 compute-0 podman[259419]: nova_compute
Oct 01 16:54:09 compute-0 podman[259475]: nova_compute
Oct 01 16:54:09 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 01 16:54:09 compute-0 systemd[1]: Stopped nova_compute container.
Oct 01 16:54:09 compute-0 systemd[1]: Starting nova_compute container...
Oct 01 16:54:09 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2192c9349a1be6088eed524ba4606745a4d4425a6e77595b4f59eeb09a80e48/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2192c9349a1be6088eed524ba4606745a4d4425a6e77595b4f59eeb09a80e48/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2192c9349a1be6088eed524ba4606745a4d4425a6e77595b4f59eeb09a80e48/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2192c9349a1be6088eed524ba4606745a4d4425a6e77595b4f59eeb09a80e48/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2192c9349a1be6088eed524ba4606745a4d4425a6e77595b4f59eeb09a80e48/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:09 compute-0 podman[259488]: 2025-10-01 16:54:09.577517765 +0000 UTC m=+0.124915016 container init a1220a3038f0ce0173e25709e534e5aa4813dad42f5a4559d3954644eeca6907 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team)
Oct 01 16:54:09 compute-0 podman[259488]: 2025-10-01 16:54:09.592547555 +0000 UTC m=+0.139944776 container start a1220a3038f0ce0173e25709e534e5aa4813dad42f5a4559d3954644eeca6907 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 01 16:54:09 compute-0 podman[259488]: nova_compute
Oct 01 16:54:09 compute-0 nova_compute[259504]: + sudo -E kolla_set_configs
Oct 01 16:54:09 compute-0 systemd[1]: Started nova_compute container.
Oct 01 16:54:09 compute-0 sudo[259405]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Validating config file
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Copying service configuration files
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Deleting /etc/ceph
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Creating directory /etc/ceph
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Setting permission for /etc/ceph
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Writing out command to execute
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 01 16:54:09 compute-0 nova_compute[259504]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 01 16:54:09 compute-0 nova_compute[259504]: ++ cat /run_command
Oct 01 16:54:09 compute-0 nova_compute[259504]: + CMD=nova-compute
Oct 01 16:54:09 compute-0 nova_compute[259504]: + ARGS=
Oct 01 16:54:09 compute-0 nova_compute[259504]: + sudo kolla_copy_cacerts
Oct 01 16:54:09 compute-0 nova_compute[259504]: + [[ ! -n '' ]]
Oct 01 16:54:09 compute-0 nova_compute[259504]: + . kolla_extend_start
Oct 01 16:54:09 compute-0 nova_compute[259504]: Running command: 'nova-compute'
Oct 01 16:54:09 compute-0 nova_compute[259504]: + echo 'Running command: '\''nova-compute'\'''
Oct 01 16:54:09 compute-0 nova_compute[259504]: + umask 0022
Oct 01 16:54:09 compute-0 nova_compute[259504]: + exec nova-compute
Oct 01 16:54:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:54:10 compute-0 sudo[259665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anxslotcwyfgfygtgcbqohkfkbofxsmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759337649.9081511-1858-86700321071083/AnsiballZ_podman_container.py'
Oct 01 16:54:10 compute-0 sudo[259665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 16:54:10 compute-0 python3.9[259667]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 01 16:54:10 compute-0 systemd[1]: Started libpod-conmon-956ee4e2bf9ef3f3837694c8da1572763df3fb057205dbff195a2e5978566388.scope.
Oct 01 16:54:10 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:54:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b19ab2e9ddc4cc0c5e38cc8728d820a8af5db69e0285d857506762158c832e4/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b19ab2e9ddc4cc0c5e38cc8728d820a8af5db69e0285d857506762158c832e4/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b19ab2e9ddc4cc0c5e38cc8728d820a8af5db69e0285d857506762158c832e4/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:10 compute-0 podman[259694]: 2025-10-01 16:54:10.700581189 +0000 UTC m=+0.109741101 container init 956ee4e2bf9ef3f3837694c8da1572763df3fb057205dbff195a2e5978566388 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 01 16:54:10 compute-0 podman[259694]: 2025-10-01 16:54:10.710033442 +0000 UTC m=+0.119193314 container start 956ee4e2bf9ef3f3837694c8da1572763df3fb057205dbff195a2e5978566388 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 01 16:54:10 compute-0 python3.9[259667]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 01 16:54:10 compute-0 podman[259707]: 2025-10-01 16:54:10.729014065 +0000 UTC m=+0.079561396 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 01 16:54:10 compute-0 nova_compute_init[259735]: INFO:nova_statedir:Applying nova statedir ownership
Oct 01 16:54:10 compute-0 nova_compute_init[259735]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 01 16:54:10 compute-0 nova_compute_init[259735]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 01 16:54:10 compute-0 nova_compute_init[259735]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 01 16:54:10 compute-0 nova_compute_init[259735]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 01 16:54:10 compute-0 nova_compute_init[259735]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 01 16:54:10 compute-0 nova_compute_init[259735]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 01 16:54:10 compute-0 nova_compute_init[259735]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 01 16:54:10 compute-0 nova_compute_init[259735]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 01 16:54:10 compute-0 nova_compute_init[259735]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 01 16:54:10 compute-0 nova_compute_init[259735]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 01 16:54:10 compute-0 nova_compute_init[259735]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 01 16:54:10 compute-0 nova_compute_init[259735]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 01 16:54:10 compute-0 nova_compute_init[259735]: INFO:nova_statedir:Nova statedir ownership complete
Oct 01 16:54:10 compute-0 systemd[1]: libpod-956ee4e2bf9ef3f3837694c8da1572763df3fb057205dbff195a2e5978566388.scope: Deactivated successfully.
Oct 01 16:54:10 compute-0 podman[259737]: 2025-10-01 16:54:10.76716473 +0000 UTC m=+0.028291232 container died 956ee4e2bf9ef3f3837694c8da1572763df3fb057205dbff195a2e5978566388 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 16:54:10 compute-0 ceph-mon[74273]: pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-956ee4e2bf9ef3f3837694c8da1572763df3fb057205dbff195a2e5978566388-userdata-shm.mount: Deactivated successfully.
Oct 01 16:54:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b19ab2e9ddc4cc0c5e38cc8728d820a8af5db69e0285d857506762158c832e4-merged.mount: Deactivated successfully.
Oct 01 16:54:10 compute-0 podman[259748]: 2025-10-01 16:54:10.849658152 +0000 UTC m=+0.072908306 container cleanup 956ee4e2bf9ef3f3837694c8da1572763df3fb057205dbff195a2e5978566388 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=nova_compute_init, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 01 16:54:10 compute-0 systemd[1]: libpod-conmon-956ee4e2bf9ef3f3837694c8da1572763df3fb057205dbff195a2e5978566388.scope: Deactivated successfully.
Oct 01 16:54:10 compute-0 sudo[259665]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:54:11
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'vms', '.rgw.root', 'backups']
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:54:11 compute-0 sshd-session[222279]: Connection closed by 192.168.122.30 port 44060
Oct 01 16:54:11 compute-0 sshd-session[222276]: pam_unix(sshd:session): session closed for user zuul
Oct 01 16:54:11 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Oct 01 16:54:11 compute-0 systemd[1]: session-50.scope: Consumed 3min 10.552s CPU time.
Oct 01 16:54:11 compute-0 systemd-logind[788]: Session 50 logged out. Waiting for processes to exit.
Oct 01 16:54:11 compute-0 systemd-logind[788]: Removed session 50.
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:54:11 compute-0 nova_compute[259504]: 2025-10-01 16:54:11.590 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 01 16:54:11 compute-0 nova_compute[259504]: 2025-10-01 16:54:11.591 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 01 16:54:11 compute-0 nova_compute[259504]: 2025-10-01 16:54:11.591 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 01 16:54:11 compute-0 nova_compute[259504]: 2025-10-01 16:54:11.591 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 01 16:54:11 compute-0 nova_compute[259504]: 2025-10-01 16:54:11.712 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 16:54:11 compute-0 nova_compute[259504]: 2025-10-01 16:54:11.740 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 16:54:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.228 2 INFO nova.virt.driver [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.315 2 INFO nova.compute.provider_config [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.329 2 DEBUG oslo_concurrency.lockutils [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.330 2 DEBUG oslo_concurrency.lockutils [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.330 2 DEBUG oslo_concurrency.lockutils [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.330 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.330 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.331 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.331 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.331 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.331 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.331 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.331 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.331 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.331 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.332 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.332 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.332 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.332 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.332 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.332 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.332 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.333 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.333 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.333 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.333 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.333 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.333 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.333 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.334 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.334 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.334 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.334 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.334 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.334 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.334 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.335 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.335 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.335 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.335 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.335 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.335 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.335 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.336 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.336 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.336 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.336 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.336 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.336 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.337 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.337 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.337 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.337 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.337 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.337 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.337 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.338 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.338 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.338 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.338 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.338 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.339 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.339 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.339 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.339 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.339 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.339 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.340 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.340 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.340 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.340 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.340 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.341 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.341 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.341 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.341 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.341 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.342 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.342 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.342 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.342 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.342 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.343 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.343 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.343 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.343 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.344 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.344 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.344 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.344 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.344 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.345 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.345 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.345 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.345 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.345 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.346 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.346 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.346 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.346 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.346 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.346 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.347 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.347 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.347 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.347 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.347 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.347 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.347 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.348 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.348 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.348 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.348 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.348 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.348 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.349 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.349 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.349 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.349 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.349 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.349 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.349 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.350 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.350 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.350 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.350 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.350 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.350 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.350 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.351 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.351 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.351 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.351 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.351 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.351 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.351 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.351 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.352 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.352 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.352 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.352 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.352 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.352 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.352 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.353 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.353 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.353 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.353 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.353 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.353 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.354 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.354 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.354 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.354 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.354 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.354 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.354 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.355 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.355 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.355 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.355 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.355 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.355 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.355 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.356 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.356 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.356 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.356 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.356 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.356 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.356 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.357 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.357 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.357 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.357 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.357 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.357 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.357 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.358 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.358 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.358 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.358 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.358 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.358 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.359 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.359 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.359 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.359 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.359 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.359 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.359 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.359 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.360 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.360 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.360 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.360 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.360 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.360 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.360 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.361 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.361 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.361 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.361 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.361 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.361 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.362 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.362 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.362 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.362 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.362 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.362 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.362 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.362 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.363 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.363 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.363 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.363 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.363 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.363 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.363 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.364 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.364 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.364 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.364 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.364 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.364 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.364 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.365 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.365 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.365 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.365 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.365 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.365 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.365 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.366 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.366 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.366 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.366 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.366 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.366 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.366 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.367 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.367 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.367 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.367 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.367 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.367 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.367 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.368 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.368 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.368 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.368 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.368 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.368 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.368 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.369 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.369 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.369 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.369 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.369 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.369 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.369 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.370 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.370 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.370 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.370 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.370 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.370 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.370 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.371 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.371 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.371 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.371 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.371 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.371 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.372 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.372 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.372 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.372 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.372 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.372 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.372 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.372 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.373 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.373 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.373 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.373 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.373 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.373 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.373 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.374 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.374 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.374 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.374 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.374 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.374 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.374 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.375 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.375 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.375 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.375 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.375 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.375 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.375 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.376 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.376 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.376 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.376 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.376 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.376 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.376 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.377 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.377 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.377 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.377 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.377 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.377 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.377 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.378 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.378 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.378 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.378 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.378 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.378 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.378 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.379 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.379 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.379 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.379 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.379 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.379 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.379 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.380 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.380 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.380 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.380 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.380 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.380 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.380 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.381 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.381 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.381 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.381 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.381 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.381 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.381 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.382 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.382 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.382 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.382 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.382 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.382 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.382 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.383 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.383 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.383 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.383 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.383 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.383 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.383 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.384 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.384 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.384 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.384 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.384 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.384 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.385 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.385 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.385 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.385 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.385 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.385 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.385 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.386 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.386 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.386 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.386 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.386 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.386 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.386 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.387 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.387 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.387 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.387 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.387 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.387 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.387 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.388 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.388 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.388 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.388 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.388 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.388 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.388 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.389 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.389 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.389 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.389 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.389 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.389 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.389 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.390 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.390 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.390 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.390 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.390 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.390 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.390 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.391 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.391 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.391 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.391 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.391 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.391 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.391 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.392 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.392 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.392 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.392 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.392 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.392 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.392 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.393 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.393 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.393 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.393 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.393 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.393 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.393 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.394 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.394 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.394 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.394 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.394 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.394 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.394 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.395 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.395 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.395 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.395 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.395 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.395 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.395 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.396 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.396 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.396 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.396 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.396 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.396 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.396 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.397 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.397 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.397 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.397 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.397 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.397 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.398 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.398 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.398 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.398 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.398 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.398 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.398 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.399 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.399 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.399 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.399 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.399 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.399 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.399 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.400 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.400 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.400 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.400 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.400 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.400 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.400 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.401 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.401 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.401 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.401 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.401 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.401 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.401 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.402 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.402 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.402 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.402 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.402 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.402 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.402 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.402 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.403 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.403 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.403 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.403 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.403 2 WARNING oslo_config.cfg [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 01 16:54:12 compute-0 nova_compute[259504]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 01 16:54:12 compute-0 nova_compute[259504]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 01 16:54:12 compute-0 nova_compute[259504]: and ``live_migration_inbound_addr`` respectively.
Oct 01 16:54:12 compute-0 nova_compute[259504]: ).  Its value may be silently ignored in the future.
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.403 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.404 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.404 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.404 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.404 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.404 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.404 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.405 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.405 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.405 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.405 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.405 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.405 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.405 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.406 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.406 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.406 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.406 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.406 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.rbd_secret_uuid        = f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.406 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.406 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.407 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.407 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.407 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.407 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.407 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.407 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.408 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.408 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.408 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.408 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.408 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.408 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.409 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.409 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.409 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.409 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.409 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.409 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.409 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.410 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.410 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.410 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.410 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.410 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.410 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.410 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.411 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.411 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.411 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.411 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.411 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.411 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.411 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.412 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.412 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.412 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.412 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.412 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.412 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.413 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.413 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.413 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.413 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.413 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.413 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.414 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.414 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.414 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.414 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.414 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.414 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.415 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.415 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.415 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.415 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.415 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.415 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.416 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.416 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.416 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.416 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.416 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.417 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.417 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.417 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.417 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.417 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.418 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.418 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.418 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.418 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.418 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.418 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.419 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.419 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.419 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.419 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.419 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.419 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.419 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.420 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.420 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.420 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.420 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.420 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.420 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.421 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.421 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.421 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.421 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.421 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.421 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.421 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.422 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.422 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.422 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.422 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.422 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.422 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.422 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.423 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.423 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.423 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.423 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.423 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.423 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.423 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.424 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.424 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.424 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.424 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.424 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.424 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.424 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.425 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.425 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.425 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.425 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.425 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.425 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.426 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.426 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.426 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.426 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.426 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.426 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.426 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.427 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.427 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.427 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.427 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.427 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.427 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.428 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.428 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.428 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.428 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.428 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.428 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.428 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.429 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.429 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.429 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.429 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.429 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.429 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.430 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.430 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.430 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.430 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.430 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.430 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.431 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.431 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.431 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.431 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.431 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.431 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.432 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.432 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.432 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.432 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.432 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.432 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.432 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.433 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.433 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.433 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.433 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.433 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.433 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.433 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.434 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.434 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.434 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.434 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.434 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.434 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.435 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.435 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.435 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.435 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.435 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.435 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.435 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.436 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.436 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.436 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.436 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.436 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.436 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.437 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.437 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.437 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.437 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.437 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.437 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.438 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.438 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.438 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.438 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.438 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.438 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.439 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.439 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.439 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.439 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.439 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.439 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.439 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.440 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.440 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.440 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.440 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.440 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.440 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.440 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.441 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.441 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.441 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.441 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.441 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.441 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.442 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.442 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.442 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.442 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.442 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.443 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.443 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.443 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.443 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.444 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.444 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.444 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.444 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.444 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.444 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.445 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.445 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.445 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.445 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.445 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.445 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.445 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.446 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.446 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.446 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.446 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.446 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.447 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.447 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.447 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.447 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.447 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.447 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.448 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.448 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.448 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.448 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.449 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.449 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.449 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.449 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.449 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.450 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.450 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.450 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.450 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.450 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.451 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.451 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.451 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.451 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.451 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.452 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.452 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.452 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.452 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.452 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.452 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.453 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.453 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.453 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.453 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.453 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.454 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.454 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.454 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.454 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.454 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.454 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.455 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.455 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.455 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.455 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.455 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.456 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.456 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.456 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.456 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.456 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.457 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.457 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.457 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.457 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.457 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.457 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.458 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.458 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.458 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.458 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.458 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.459 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.459 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.459 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.459 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.459 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.460 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.460 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.460 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.460 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.460 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.461 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.461 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.461 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.461 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.461 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.461 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.462 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.462 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.462 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.462 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.462 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.462 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.462 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.463 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.463 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.463 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.463 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.463 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.463 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.463 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.464 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.464 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.464 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.464 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.464 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.464 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.464 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.464 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.465 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.465 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.465 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.465 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.465 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.465 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.466 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.466 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.466 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.466 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.466 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.466 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.467 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.467 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.467 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.467 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.467 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.467 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.467 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.468 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.468 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.468 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.468 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.468 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.468 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.468 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.469 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.469 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.469 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.469 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.469 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.469 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.469 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.470 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.470 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.470 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.470 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.470 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.470 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.471 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.471 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.471 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.471 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.471 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.471 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.472 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.472 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.472 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.472 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.472 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.472 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.472 2 DEBUG oslo_service.service [None req-a453c4a0-48ad-43ec-aa78-38ecc0b5a7f2 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.473 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.497 2 DEBUG nova.virt.libvirt.host [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.498 2 DEBUG nova.virt.libvirt.host [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.498 2 DEBUG nova.virt.libvirt.host [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.498 2 DEBUG nova.virt.libvirt.host [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.512 2 DEBUG nova.virt.libvirt.host [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f8fdbdbd4c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.515 2 DEBUG nova.virt.libvirt.host [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f8fdbdbd4c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.516 2 INFO nova.virt.libvirt.driver [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Connection event '1' reason 'None'
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.521 2 INFO nova.virt.libvirt.host [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Libvirt host capabilities <capabilities>
Oct 01 16:54:12 compute-0 nova_compute[259504]: 
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <host>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <uuid>815dd0ef-d378-4739-986b-1e44e6c1aa0a</uuid>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <cpu>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <arch>x86_64</arch>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model>EPYC-Rome-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <vendor>AMD</vendor>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <microcode version='16777317'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <signature family='23' model='49' stepping='0'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='x2apic'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='tsc-deadline'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='osxsave'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='hypervisor'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='tsc_adjust'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='spec-ctrl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='stibp'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='arch-capabilities'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='ssbd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='cmp_legacy'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='topoext'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='virt-ssbd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='lbrv'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='tsc-scale'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='vmcb-clean'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='pause-filter'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='pfthreshold'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='svme-addr-chk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='rdctl-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='skip-l1dfl-vmentry'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='mds-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature name='pschange-mc-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <pages unit='KiB' size='4'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <pages unit='KiB' size='2048'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <pages unit='KiB' size='1048576'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </cpu>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <power_management>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <suspend_mem/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </power_management>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <iommu support='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <migration_features>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <live/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <uri_transports>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <uri_transport>tcp</uri_transport>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <uri_transport>rdma</uri_transport>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </uri_transports>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </migration_features>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <topology>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <cells num='1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <cell id='0'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:           <memory unit='KiB'>7864100</memory>
Oct 01 16:54:12 compute-0 nova_compute[259504]:           <pages unit='KiB' size='4'>1966025</pages>
Oct 01 16:54:12 compute-0 nova_compute[259504]:           <pages unit='KiB' size='2048'>0</pages>
Oct 01 16:54:12 compute-0 nova_compute[259504]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 01 16:54:12 compute-0 nova_compute[259504]:           <distances>
Oct 01 16:54:12 compute-0 nova_compute[259504]:             <sibling id='0' value='10'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:           </distances>
Oct 01 16:54:12 compute-0 nova_compute[259504]:           <cpus num='8'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:           </cpus>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         </cell>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </cells>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </topology>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <cache>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </cache>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <secmodel>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model>selinux</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <doi>0</doi>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </secmodel>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <secmodel>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model>dac</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <doi>0</doi>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </secmodel>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </host>
Oct 01 16:54:12 compute-0 nova_compute[259504]: 
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <guest>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <os_type>hvm</os_type>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <arch name='i686'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <wordsize>32</wordsize>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <domain type='qemu'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <domain type='kvm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </arch>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <features>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <pae/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <nonpae/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <acpi default='on' toggle='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <apic default='on' toggle='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <cpuselection/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <deviceboot/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <disksnapshot default='on' toggle='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <externalSnapshot/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </features>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </guest>
Oct 01 16:54:12 compute-0 nova_compute[259504]: 
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <guest>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <os_type>hvm</os_type>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <arch name='x86_64'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <wordsize>64</wordsize>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <domain type='qemu'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <domain type='kvm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </arch>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <features>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <acpi default='on' toggle='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <apic default='on' toggle='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <cpuselection/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <deviceboot/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <disksnapshot default='on' toggle='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <externalSnapshot/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </features>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </guest>
Oct 01 16:54:12 compute-0 nova_compute[259504]: 
Oct 01 16:54:12 compute-0 nova_compute[259504]: </capabilities>
Oct 01 16:54:12 compute-0 nova_compute[259504]: 
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.529 2 DEBUG nova.virt.libvirt.host [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.536 2 WARNING nova.virt.libvirt.driver [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.537 2 DEBUG nova.virt.libvirt.volume.mount [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.571 2 DEBUG nova.virt.libvirt.host [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 01 16:54:12 compute-0 nova_compute[259504]: <domainCapabilities>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <path>/usr/libexec/qemu-kvm</path>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <domain>kvm</domain>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <arch>i686</arch>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <vcpu max='240'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <iothreads supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <os supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <enum name='firmware'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <loader supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='type'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>rom</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>pflash</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='readonly'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>yes</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>no</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='secure'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>no</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </loader>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </os>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <cpu>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <mode name='host-passthrough' supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='hostPassthroughMigratable'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>on</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>off</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </mode>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <mode name='maximum' supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='maximumMigratable'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>on</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>off</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </mode>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <mode name='host-model' supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <vendor>AMD</vendor>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='x2apic'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='tsc-deadline'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='hypervisor'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='tsc_adjust'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='spec-ctrl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='stibp'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='arch-capabilities'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='ssbd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='cmp_legacy'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='overflow-recov'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='succor'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='ibrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='amd-ssbd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='virt-ssbd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='lbrv'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='tsc-scale'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='vmcb-clean'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='flushbyasid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='pause-filter'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='pfthreshold'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='svme-addr-chk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='rdctl-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='mds-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='pschange-mc-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='gds-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='rfds-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='disable' name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </mode>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <mode name='custom' supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-noTSX'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-v5'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cooperlake'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cooperlake-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cooperlake-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Denverton'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mpx'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Denverton-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mpx'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Denverton-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Denverton-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Dhyana-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Genoa'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amd-psfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='auto-ibrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='no-nested-data-bp'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='null-sel-clr-base'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='stibp-always-on'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Genoa-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amd-psfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='auto-ibrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='no-nested-data-bp'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='null-sel-clr-base'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='stibp-always-on'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Milan'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Milan-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Milan-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amd-psfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='no-nested-data-bp'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='null-sel-clr-base'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='stibp-always-on'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Rome'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Rome-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Rome-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Rome-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='GraniteRapids'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mcdt-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pbrsb-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='prefetchiti'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='GraniteRapids-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mcdt-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pbrsb-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='prefetchiti'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='GraniteRapids-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx10'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx10-128'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx10-256'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx10-512'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mcdt-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pbrsb-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='prefetchiti'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-noTSX'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-noTSX'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v5'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v6'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v7'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='IvyBridge'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='IvyBridge-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='IvyBridge-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='IvyBridge-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='KnightsMill'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-4fmaps'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-4vnniw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512er'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512pf'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='KnightsMill-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-4fmaps'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-4vnniw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512er'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512pf'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Opteron_G4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fma4'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xop'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Opteron_G4-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fma4'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xop'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Opteron_G5'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fma4'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tbm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xop'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Opteron_G5-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fma4'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tbm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xop'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SapphireRapids'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SapphireRapids-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SapphireRapids-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SapphireRapids-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SierraForest'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-ne-convert'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cmpccxadd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mcdt-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pbrsb-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SierraForest-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-ne-convert'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cmpccxadd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mcdt-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pbrsb-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-v5'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Snowridge'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='core-capability'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mpx'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='split-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Snowridge-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='core-capability'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mpx'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='split-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Snowridge-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='core-capability'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='split-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Snowridge-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='core-capability'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='split-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Snowridge-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='athlon'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnow'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnowext'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='athlon-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnow'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnowext'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='core2duo'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='core2duo-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='coreduo'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='coreduo-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='n270'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='n270-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='phenom'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnow'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnowext'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='phenom-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnow'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnowext'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </mode>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </cpu>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <memoryBacking supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <enum name='sourceType'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <value>file</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <value>anonymous</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <value>memfd</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </memoryBacking>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <devices>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <disk supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='diskDevice'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>disk</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>cdrom</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>floppy</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>lun</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='bus'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>ide</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>fdc</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>scsi</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>usb</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>sata</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='model'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio-transitional</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio-non-transitional</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </disk>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <graphics supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='type'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>vnc</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>egl-headless</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>dbus</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </graphics>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <video supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='modelType'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>vga</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>cirrus</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>none</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>bochs</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>ramfb</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </video>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <hostdev supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='mode'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>subsystem</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='startupPolicy'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>default</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>mandatory</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>requisite</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>optional</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='subsysType'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>usb</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>pci</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>scsi</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='capsType'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='pciBackend'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </hostdev>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <rng supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='model'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio-transitional</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio-non-transitional</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='backendModel'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>random</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>egd</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>builtin</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </rng>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <filesystem supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='driverType'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>path</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>handle</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtiofs</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </filesystem>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <tpm supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='model'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>tpm-tis</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>tpm-crb</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='backendModel'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>emulator</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>external</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='backendVersion'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>2.0</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </tpm>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <redirdev supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='bus'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>usb</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </redirdev>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <channel supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='type'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>pty</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>unix</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </channel>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <crypto supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='model'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='type'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>qemu</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='backendModel'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>builtin</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </crypto>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <interface supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='backendType'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>default</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>passt</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </interface>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <panic supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='model'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>isa</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>hyperv</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </panic>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </devices>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <features>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <gic supported='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <vmcoreinfo supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <genid supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <backingStoreInput supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <backup supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <async-teardown supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <ps2 supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <sev supported='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <sgx supported='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <hyperv supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='features'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>relaxed</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>vapic</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>spinlocks</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>vpindex</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>runtime</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>synic</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>stimer</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>reset</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>vendor_id</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>frequencies</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>reenlightenment</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>tlbflush</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>ipi</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>avic</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>emsr_bitmap</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>xmm_input</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </hyperv>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <launchSecurity supported='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </features>
Oct 01 16:54:12 compute-0 nova_compute[259504]: </domainCapabilities>
Oct 01 16:54:12 compute-0 nova_compute[259504]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.577 2 DEBUG nova.virt.libvirt.host [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 01 16:54:12 compute-0 nova_compute[259504]: <domainCapabilities>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <path>/usr/libexec/qemu-kvm</path>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <domain>kvm</domain>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <arch>i686</arch>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <vcpu max='4096'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <iothreads supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <os supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <enum name='firmware'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <loader supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='type'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>rom</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>pflash</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='readonly'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>yes</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>no</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='secure'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>no</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </loader>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </os>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <cpu>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <mode name='host-passthrough' supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='hostPassthroughMigratable'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>on</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>off</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </mode>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <mode name='maximum' supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='maximumMigratable'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>on</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>off</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </mode>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <mode name='host-model' supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <vendor>AMD</vendor>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='x2apic'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='tsc-deadline'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='hypervisor'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='tsc_adjust'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='spec-ctrl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='stibp'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='arch-capabilities'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='ssbd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='cmp_legacy'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='overflow-recov'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='succor'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='ibrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='amd-ssbd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='virt-ssbd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='lbrv'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='tsc-scale'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='vmcb-clean'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='flushbyasid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='pause-filter'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='pfthreshold'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='svme-addr-chk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='rdctl-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='mds-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='pschange-mc-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='gds-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='rfds-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='disable' name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </mode>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <mode name='custom' supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-noTSX'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-v5'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cooperlake'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cooperlake-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cooperlake-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Denverton'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mpx'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Denverton-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mpx'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Denverton-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Denverton-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Dhyana-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Genoa'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amd-psfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='auto-ibrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='no-nested-data-bp'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='null-sel-clr-base'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='stibp-always-on'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Genoa-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amd-psfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='auto-ibrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='no-nested-data-bp'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='null-sel-clr-base'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='stibp-always-on'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Milan'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Milan-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Milan-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amd-psfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='no-nested-data-bp'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='null-sel-clr-base'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='stibp-always-on'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Rome'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Rome-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Rome-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Rome-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='GraniteRapids'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mcdt-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pbrsb-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='prefetchiti'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='GraniteRapids-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mcdt-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pbrsb-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='prefetchiti'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='GraniteRapids-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx10'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx10-128'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx10-256'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx10-512'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mcdt-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pbrsb-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='prefetchiti'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-noTSX'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-noTSX'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v5'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v6'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v7'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='IvyBridge'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='IvyBridge-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='IvyBridge-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='IvyBridge-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='KnightsMill'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-4fmaps'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-4vnniw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512er'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512pf'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='KnightsMill-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-4fmaps'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-4vnniw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512er'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512pf'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Opteron_G4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fma4'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xop'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Opteron_G4-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fma4'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xop'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Opteron_G5'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fma4'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tbm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xop'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Opteron_G5-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fma4'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tbm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xop'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SapphireRapids'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SapphireRapids-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SapphireRapids-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SapphireRapids-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SierraForest'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-ne-convert'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cmpccxadd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mcdt-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pbrsb-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SierraForest-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-ne-convert'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cmpccxadd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mcdt-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pbrsb-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-v5'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Snowridge'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='core-capability'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mpx'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='split-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Snowridge-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='core-capability'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mpx'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='split-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Snowridge-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='core-capability'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='split-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Snowridge-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='core-capability'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='split-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Snowridge-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='athlon'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnow'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnowext'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='athlon-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnow'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnowext'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='core2duo'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='core2duo-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='coreduo'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='coreduo-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='n270'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='n270-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='phenom'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnow'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnowext'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='phenom-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnow'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnowext'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </mode>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </cpu>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <memoryBacking supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <enum name='sourceType'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <value>file</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <value>anonymous</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <value>memfd</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </memoryBacking>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <devices>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <disk supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='diskDevice'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>disk</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>cdrom</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>floppy</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>lun</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='bus'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>fdc</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>scsi</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>usb</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>sata</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='model'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio-transitional</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio-non-transitional</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </disk>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <graphics supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='type'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>vnc</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>egl-headless</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>dbus</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </graphics>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <video supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='modelType'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>vga</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>cirrus</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>none</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>bochs</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>ramfb</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </video>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <hostdev supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='mode'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>subsystem</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='startupPolicy'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>default</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>mandatory</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>requisite</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>optional</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='subsysType'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>usb</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>pci</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>scsi</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='capsType'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='pciBackend'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </hostdev>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <rng supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='model'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio-transitional</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio-non-transitional</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='backendModel'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>random</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>egd</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>builtin</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </rng>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <filesystem supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='driverType'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>path</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>handle</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtiofs</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </filesystem>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <tpm supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='model'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>tpm-tis</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>tpm-crb</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='backendModel'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>emulator</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>external</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='backendVersion'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>2.0</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </tpm>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <redirdev supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='bus'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>usb</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </redirdev>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <channel supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='type'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>pty</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>unix</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </channel>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <crypto supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='model'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='type'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>qemu</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='backendModel'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>builtin</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </crypto>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <interface supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='backendType'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>default</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>passt</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </interface>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <panic supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='model'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>isa</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>hyperv</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </panic>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </devices>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <features>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <gic supported='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <vmcoreinfo supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <genid supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <backingStoreInput supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <backup supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <async-teardown supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <ps2 supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <sev supported='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <sgx supported='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <hyperv supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='features'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>relaxed</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>vapic</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>spinlocks</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>vpindex</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>runtime</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>synic</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>stimer</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>reset</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>vendor_id</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>frequencies</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>reenlightenment</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>tlbflush</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>ipi</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>avic</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>emsr_bitmap</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>xmm_input</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </hyperv>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <launchSecurity supported='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </features>
Oct 01 16:54:12 compute-0 nova_compute[259504]: </domainCapabilities>
Oct 01 16:54:12 compute-0 nova_compute[259504]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.606 2 DEBUG nova.virt.libvirt.host [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.611 2 DEBUG nova.virt.libvirt.host [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 01 16:54:12 compute-0 nova_compute[259504]: <domainCapabilities>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <path>/usr/libexec/qemu-kvm</path>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <domain>kvm</domain>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <arch>x86_64</arch>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <vcpu max='240'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <iothreads supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <os supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <enum name='firmware'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <loader supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='type'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>rom</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>pflash</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='readonly'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>yes</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>no</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='secure'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>no</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </loader>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </os>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <cpu>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <mode name='host-passthrough' supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='hostPassthroughMigratable'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>on</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>off</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </mode>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <mode name='maximum' supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='maximumMigratable'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>on</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>off</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </mode>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <mode name='host-model' supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <vendor>AMD</vendor>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='x2apic'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='tsc-deadline'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='hypervisor'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='tsc_adjust'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='spec-ctrl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='stibp'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='arch-capabilities'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='ssbd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='cmp_legacy'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='overflow-recov'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='succor'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='ibrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='amd-ssbd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='virt-ssbd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='lbrv'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='tsc-scale'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='vmcb-clean'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='flushbyasid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='pause-filter'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='pfthreshold'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='svme-addr-chk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='rdctl-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='mds-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='pschange-mc-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='gds-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='rfds-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='disable' name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </mode>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <mode name='custom' supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-noTSX'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-v5'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cooperlake'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cooperlake-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cooperlake-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Denverton'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mpx'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Denverton-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mpx'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Denverton-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Denverton-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Dhyana-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Genoa'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amd-psfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='auto-ibrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='no-nested-data-bp'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='null-sel-clr-base'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='stibp-always-on'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Genoa-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amd-psfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='auto-ibrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='no-nested-data-bp'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='null-sel-clr-base'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='stibp-always-on'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Milan'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Milan-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Milan-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amd-psfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='no-nested-data-bp'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='null-sel-clr-base'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='stibp-always-on'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Rome'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Rome-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Rome-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Rome-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='GraniteRapids'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mcdt-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pbrsb-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='prefetchiti'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='GraniteRapids-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mcdt-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pbrsb-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='prefetchiti'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='GraniteRapids-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx10'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx10-128'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx10-256'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx10-512'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mcdt-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pbrsb-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='prefetchiti'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-noTSX'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-noTSX'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v5'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v6'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v7'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='IvyBridge'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='IvyBridge-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='IvyBridge-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='IvyBridge-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='KnightsMill'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-4fmaps'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-4vnniw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512er'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512pf'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='KnightsMill-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-4fmaps'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-4vnniw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512er'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512pf'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Opteron_G4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fma4'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xop'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Opteron_G4-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fma4'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xop'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Opteron_G5'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fma4'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tbm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xop'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Opteron_G5-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fma4'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tbm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xop'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SapphireRapids'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SapphireRapids-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SapphireRapids-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SapphireRapids-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SierraForest'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-ne-convert'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cmpccxadd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mcdt-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pbrsb-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SierraForest-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-ne-convert'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cmpccxadd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mcdt-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pbrsb-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-v5'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Snowridge'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='core-capability'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mpx'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='split-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Snowridge-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='core-capability'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mpx'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='split-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Snowridge-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='core-capability'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='split-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Snowridge-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='core-capability'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='split-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Snowridge-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='athlon'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnow'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnowext'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='athlon-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnow'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnowext'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='core2duo'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='core2duo-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='coreduo'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='coreduo-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='n270'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='n270-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='phenom'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnow'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnowext'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='phenom-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnow'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnowext'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </mode>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </cpu>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <memoryBacking supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <enum name='sourceType'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <value>file</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <value>anonymous</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <value>memfd</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </memoryBacking>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <devices>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <disk supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='diskDevice'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>disk</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>cdrom</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>floppy</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>lun</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='bus'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>ide</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>fdc</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>scsi</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>usb</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>sata</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='model'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio-transitional</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio-non-transitional</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </disk>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <graphics supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='type'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>vnc</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>egl-headless</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>dbus</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </graphics>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <video supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='modelType'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>vga</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>cirrus</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>none</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>bochs</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>ramfb</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </video>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <hostdev supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='mode'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>subsystem</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='startupPolicy'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>default</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>mandatory</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>requisite</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>optional</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='subsysType'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>usb</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>pci</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>scsi</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='capsType'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='pciBackend'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </hostdev>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <rng supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='model'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio-transitional</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio-non-transitional</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='backendModel'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>random</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>egd</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>builtin</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </rng>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <filesystem supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='driverType'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>path</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>handle</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtiofs</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </filesystem>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <tpm supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='model'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>tpm-tis</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>tpm-crb</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='backendModel'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>emulator</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>external</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='backendVersion'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>2.0</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </tpm>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <redirdev supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='bus'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>usb</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </redirdev>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <channel supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='type'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>pty</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>unix</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </channel>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <crypto supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='model'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='type'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>qemu</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='backendModel'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>builtin</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </crypto>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <interface supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='backendType'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>default</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>passt</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </interface>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <panic supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='model'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>isa</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>hyperv</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </panic>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </devices>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <features>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <gic supported='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <vmcoreinfo supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <genid supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <backingStoreInput supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <backup supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <async-teardown supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <ps2 supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <sev supported='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <sgx supported='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <hyperv supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='features'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>relaxed</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>vapic</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>spinlocks</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>vpindex</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>runtime</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>synic</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>stimer</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>reset</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>vendor_id</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>frequencies</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>reenlightenment</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>tlbflush</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>ipi</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>avic</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>emsr_bitmap</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>xmm_input</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </hyperv>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <launchSecurity supported='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </features>
Oct 01 16:54:12 compute-0 nova_compute[259504]: </domainCapabilities>
Oct 01 16:54:12 compute-0 nova_compute[259504]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.672 2 DEBUG nova.virt.libvirt.host [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 01 16:54:12 compute-0 nova_compute[259504]: <domainCapabilities>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <path>/usr/libexec/qemu-kvm</path>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <domain>kvm</domain>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <arch>x86_64</arch>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <vcpu max='4096'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <iothreads supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <os supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <enum name='firmware'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <value>efi</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <loader supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='type'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>rom</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>pflash</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='readonly'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>yes</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>no</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='secure'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>yes</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>no</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </loader>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </os>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <cpu>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <mode name='host-passthrough' supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='hostPassthroughMigratable'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>on</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>off</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </mode>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <mode name='maximum' supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='maximumMigratable'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>on</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>off</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </mode>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <mode name='host-model' supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <vendor>AMD</vendor>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='x2apic'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='tsc-deadline'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='hypervisor'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='tsc_adjust'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='spec-ctrl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='stibp'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='arch-capabilities'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='ssbd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='cmp_legacy'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='overflow-recov'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='succor'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='ibrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='amd-ssbd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='virt-ssbd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='lbrv'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='tsc-scale'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='vmcb-clean'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='flushbyasid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='pause-filter'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='pfthreshold'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='svme-addr-chk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='rdctl-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='mds-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='pschange-mc-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='gds-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='require' name='rfds-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <feature policy='disable' name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </mode>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <mode name='custom' supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-noTSX'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Broadwell-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cascadelake-Server-v5'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cooperlake'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cooperlake-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Cooperlake-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Denverton'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mpx'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Denverton-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mpx'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Denverton-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Denverton-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Dhyana-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Genoa'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amd-psfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='auto-ibrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='no-nested-data-bp'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='null-sel-clr-base'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='stibp-always-on'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Genoa-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amd-psfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='auto-ibrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='no-nested-data-bp'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='null-sel-clr-base'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='stibp-always-on'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Milan'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Milan-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Milan-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amd-psfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='no-nested-data-bp'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='null-sel-clr-base'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='stibp-always-on'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Rome'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Rome-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Rome-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-Rome-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='EPYC-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='GraniteRapids'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mcdt-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pbrsb-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='prefetchiti'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='GraniteRapids-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mcdt-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pbrsb-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='prefetchiti'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='GraniteRapids-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx10'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx10-128'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx10-256'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx10-512'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mcdt-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pbrsb-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='prefetchiti'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-noTSX'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Haswell-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-noTSX'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v5'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v6'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Icelake-Server-v7'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='IvyBridge'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='IvyBridge-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='IvyBridge-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='IvyBridge-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='KnightsMill'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-4fmaps'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-4vnniw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512er'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512pf'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='KnightsMill-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-4fmaps'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-4vnniw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512er'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512pf'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Opteron_G4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fma4'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xop'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Opteron_G4-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fma4'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xop'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Opteron_G5'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fma4'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tbm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xop'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Opteron_G5-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fma4'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tbm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xop'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SapphireRapids'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SapphireRapids-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SapphireRapids-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SapphireRapids-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='amx-tile'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-bf16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-fp16'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512-vpopcntdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bitalg'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vbmi2'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrc'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fzrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='la57'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='taa-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='tsx-ldtrk'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xfd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SierraForest'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-ne-convert'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cmpccxadd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mcdt-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pbrsb-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='SierraForest-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-ifma'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-ne-convert'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx-vnni-int8'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='bus-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cmpccxadd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fbsdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='fsrs'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ibrs-all'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mcdt-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pbrsb-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='psdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='sbdr-ssdp-no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='serialize'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vaes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='vpclmulqdq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Client-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='hle'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='rtm'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Skylake-Server-v5'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512bw'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512cd'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512dq'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512f'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='avx512vl'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='invpcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pcid'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='pku'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Snowridge'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='core-capability'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mpx'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='split-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Snowridge-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='core-capability'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='mpx'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='split-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Snowridge-v2'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='core-capability'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='split-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Snowridge-v3'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='core-capability'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='split-lock-detect'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='Snowridge-v4'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='cldemote'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='erms'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='gfni'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdir64b'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='movdiri'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='xsaves'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='athlon'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnow'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnowext'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='athlon-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnow'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnowext'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='core2duo'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='core2duo-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='coreduo'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='coreduo-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='n270'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='n270-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='ss'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='phenom'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnow'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnowext'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <blockers model='phenom-v1'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnow'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <feature name='3dnowext'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </blockers>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </mode>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </cpu>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <memoryBacking supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <enum name='sourceType'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <value>file</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <value>anonymous</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <value>memfd</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </memoryBacking>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <devices>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <disk supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='diskDevice'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>disk</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>cdrom</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>floppy</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>lun</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='bus'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>fdc</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>scsi</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>usb</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>sata</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='model'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio-transitional</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio-non-transitional</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </disk>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <graphics supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='type'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>vnc</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>egl-headless</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>dbus</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </graphics>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <video supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='modelType'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>vga</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>cirrus</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>none</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>bochs</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>ramfb</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </video>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <hostdev supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='mode'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>subsystem</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='startupPolicy'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>default</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>mandatory</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>requisite</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>optional</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='subsysType'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>usb</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>pci</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>scsi</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='capsType'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='pciBackend'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </hostdev>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <rng supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='model'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio-transitional</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtio-non-transitional</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='backendModel'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>random</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>egd</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>builtin</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </rng>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <filesystem supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='driverType'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>path</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>handle</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>virtiofs</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </filesystem>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <tpm supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='model'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>tpm-tis</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>tpm-crb</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='backendModel'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>emulator</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>external</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='backendVersion'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>2.0</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </tpm>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <redirdev supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='bus'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>usb</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </redirdev>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <channel supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='type'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>pty</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>unix</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </channel>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <crypto supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='model'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='type'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>qemu</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='backendModel'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>builtin</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </crypto>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <interface supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='backendType'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>default</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>passt</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </interface>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <panic supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='model'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>isa</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>hyperv</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </panic>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </devices>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   <features>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <gic supported='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <vmcoreinfo supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <genid supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <backingStoreInput supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <backup supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <async-teardown supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <ps2 supported='yes'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <sev supported='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <sgx supported='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <hyperv supported='yes'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       <enum name='features'>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>relaxed</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>vapic</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>spinlocks</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>vpindex</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>runtime</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>synic</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>stimer</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>reset</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>vendor_id</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>frequencies</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>reenlightenment</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>tlbflush</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>ipi</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>avic</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>emsr_bitmap</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:         <value>xmm_input</value>
Oct 01 16:54:12 compute-0 nova_compute[259504]:       </enum>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     </hyperv>
Oct 01 16:54:12 compute-0 nova_compute[259504]:     <launchSecurity supported='no'/>
Oct 01 16:54:12 compute-0 nova_compute[259504]:   </features>
Oct 01 16:54:12 compute-0 nova_compute[259504]: </domainCapabilities>
Oct 01 16:54:12 compute-0 nova_compute[259504]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.726 2 DEBUG nova.virt.libvirt.host [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.727 2 DEBUG nova.virt.libvirt.host [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.727 2 DEBUG nova.virt.libvirt.host [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.727 2 INFO nova.virt.libvirt.host [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Secure Boot support detected
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.730 2 INFO nova.virt.libvirt.driver [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.730 2 INFO nova.virt.libvirt.driver [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.739 2 DEBUG nova.virt.libvirt.driver [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.778 2 INFO nova.virt.node [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Determined node identity 2417da73-53f1-4edf-ae4c-fbd9fa470d6b from /var/lib/nova/compute_id
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.802 2 WARNING nova.compute.manager [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Compute nodes ['2417da73-53f1-4edf-ae4c-fbd9fa470d6b'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 01 16:54:12 compute-0 ceph-mon[74273]: pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.862 2 INFO nova.compute.manager [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.912 2 WARNING nova.compute.manager [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.913 2 DEBUG oslo_concurrency.lockutils [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.913 2 DEBUG oslo_concurrency.lockutils [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.914 2 DEBUG oslo_concurrency.lockutils [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.914 2 DEBUG nova.compute.resource_tracker [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 16:54:12 compute-0 nova_compute[259504]: 2025-10-01 16:54:12.915 2 DEBUG oslo_concurrency.processutils [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 16:54:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 16:54:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2168526144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:54:13 compute-0 nova_compute[259504]: 2025-10-01 16:54:13.370 2 DEBUG oslo_concurrency.processutils [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 16:54:13 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 01 16:54:13 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 01 16:54:13 compute-0 nova_compute[259504]: 2025-10-01 16:54:13.727 2 WARNING nova.virt.libvirt.driver [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 16:54:13 compute-0 nova_compute[259504]: 2025-10-01 16:54:13.728 2 DEBUG nova.compute.resource_tracker [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5187MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 16:54:13 compute-0 nova_compute[259504]: 2025-10-01 16:54:13.728 2 DEBUG oslo_concurrency.lockutils [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:54:13 compute-0 nova_compute[259504]: 2025-10-01 16:54:13.729 2 DEBUG oslo_concurrency.lockutils [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:54:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:13 compute-0 nova_compute[259504]: 2025-10-01 16:54:13.749 2 WARNING nova.compute.resource_tracker [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] No compute node record for compute-0.ctlplane.example.com:2417da73-53f1-4edf-ae4c-fbd9fa470d6b: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 2417da73-53f1-4edf-ae4c-fbd9fa470d6b could not be found.
Oct 01 16:54:13 compute-0 nova_compute[259504]: 2025-10-01 16:54:13.779 2 INFO nova.compute.resource_tracker [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b
Oct 01 16:54:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2168526144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:54:13 compute-0 nova_compute[259504]: 2025-10-01 16:54:13.870 2 DEBUG nova.compute.resource_tracker [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 16:54:13 compute-0 nova_compute[259504]: 2025-10-01 16:54:13.871 2 DEBUG nova.compute.resource_tracker [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 16:54:14 compute-0 nova_compute[259504]: 2025-10-01 16:54:14.741 2 INFO nova.scheduler.client.report [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] [req-9abb2bac-8623-48bc-b96e-373ba0440d45] Created resource provider record via placement API for resource provider with UUID 2417da73-53f1-4edf-ae4c-fbd9fa470d6b and name compute-0.ctlplane.example.com.
Oct 01 16:54:14 compute-0 ceph-mon[74273]: pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:54:15 compute-0 nova_compute[259504]: 2025-10-01 16:54:15.146 2 DEBUG oslo_concurrency.processutils [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 16:54:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 16:54:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2569830269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:54:15 compute-0 nova_compute[259504]: 2025-10-01 16:54:15.585 2 DEBUG oslo_concurrency.processutils [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 16:54:15 compute-0 nova_compute[259504]: 2025-10-01 16:54:15.590 2 DEBUG nova.virt.libvirt.host [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 01 16:54:15 compute-0 nova_compute[259504]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Oct 01 16:54:15 compute-0 nova_compute[259504]: 2025-10-01 16:54:15.591 2 INFO nova.virt.libvirt.host [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] kernel doesn't support AMD SEV
Oct 01 16:54:15 compute-0 nova_compute[259504]: 2025-10-01 16:54:15.593 2 DEBUG nova.compute.provider_tree [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Updating inventory in ProviderTree for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 01 16:54:15 compute-0 nova_compute[259504]: 2025-10-01 16:54:15.594 2 DEBUG nova.virt.libvirt.driver [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 01 16:54:15 compute-0 nova_compute[259504]: 2025-10-01 16:54:15.656 2 DEBUG nova.scheduler.client.report [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Updated inventory for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 01 16:54:15 compute-0 nova_compute[259504]: 2025-10-01 16:54:15.657 2 DEBUG nova.compute.provider_tree [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Updating resource provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 01 16:54:15 compute-0 nova_compute[259504]: 2025-10-01 16:54:15.657 2 DEBUG nova.compute.provider_tree [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Updating inventory in ProviderTree for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 01 16:54:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:15 compute-0 nova_compute[259504]: 2025-10-01 16:54:15.816 2 DEBUG nova.compute.provider_tree [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Updating resource provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 01 16:54:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2569830269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:54:15 compute-0 nova_compute[259504]: 2025-10-01 16:54:15.845 2 DEBUG nova.compute.resource_tracker [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 16:54:15 compute-0 nova_compute[259504]: 2025-10-01 16:54:15.847 2 DEBUG oslo_concurrency.lockutils [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:54:15 compute-0 nova_compute[259504]: 2025-10-01 16:54:15.848 2 DEBUG nova.service [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Oct 01 16:54:15 compute-0 nova_compute[259504]: 2025-10-01 16:54:15.969 2 DEBUG nova.service [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Oct 01 16:54:15 compute-0 nova_compute[259504]: 2025-10-01 16:54:15.970 2 DEBUG nova.servicegroup.drivers.db [None req-78fec769-90d1-40bb-b51b-cc3cb7c4880a - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Oct 01 16:54:16 compute-0 ceph-mon[74273]: pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:18 compute-0 ceph-mon[74273]: pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:54:19.959 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:54:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:54:19.959 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:54:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:54:19.959 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:54:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:54:20 compute-0 sudo[259892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:54:20 compute-0 sudo[259892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:54:20 compute-0 sudo[259892]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:20 compute-0 sudo[259917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:54:20 compute-0 sudo[259917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:54:20 compute-0 sudo[259917]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:20 compute-0 sudo[259942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:54:20 compute-0 sudo[259942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:54:20 compute-0 sudo[259942]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:20 compute-0 ceph-mon[74273]: pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:20 compute-0 sudo[259967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:54:20 compute-0 sudo[259967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:54:21 compute-0 sudo[259967]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:54:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:54:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:54:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:54:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:54:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev f5921960-d42b-455f-9a1a-64b05524a7f2 does not exist
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev d5d46d29-27ea-4307-8edb-5d990e5990eb does not exist
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 0ccfb43c-ba43-4b49-b05c-66f2274604de does not exist
Oct 01 16:54:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:54:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:54:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:54:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:54:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:54:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:54:21 compute-0 sudo[260023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:54:21 compute-0 sudo[260023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:54:21 compute-0 sudo[260023]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:21 compute-0 sudo[260048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:54:21 compute-0 sudo[260048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:54:21 compute-0 sudo[260048]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:21 compute-0 sudo[260073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:54:21 compute-0 sudo[260073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:54:21 compute-0 sudo[260073]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:21 compute-0 sudo[260098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:54:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:54:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:54:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:54:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:54:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:54:21 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:54:21 compute-0 sudo[260098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:54:22 compute-0 podman[260122]: 2025-10-01 16:54:22.042297806 +0000 UTC m=+0.135832436 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:54:22 compute-0 podman[260190]: 2025-10-01 16:54:22.371655109 +0000 UTC m=+0.076254364 container create f7223ff32713f399521cd682c78b0e4ee0d07054e29f7fc4d20885f900506fc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 16:54:22 compute-0 systemd[1]: Started libpod-conmon-f7223ff32713f399521cd682c78b0e4ee0d07054e29f7fc4d20885f900506fc5.scope.
Oct 01 16:54:22 compute-0 podman[260190]: 2025-10-01 16:54:22.339758398 +0000 UTC m=+0.044357743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:54:22 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:54:22 compute-0 podman[260190]: 2025-10-01 16:54:22.495678294 +0000 UTC m=+0.200277569 container init f7223ff32713f399521cd682c78b0e4ee0d07054e29f7fc4d20885f900506fc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:54:22 compute-0 podman[260190]: 2025-10-01 16:54:22.509415264 +0000 UTC m=+0.214014549 container start f7223ff32713f399521cd682c78b0e4ee0d07054e29f7fc4d20885f900506fc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_johnson, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 16:54:22 compute-0 podman[260190]: 2025-10-01 16:54:22.514286774 +0000 UTC m=+0.218886029 container attach f7223ff32713f399521cd682c78b0e4ee0d07054e29f7fc4d20885f900506fc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_johnson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:54:22 compute-0 determined_johnson[260206]: 167 167
Oct 01 16:54:22 compute-0 systemd[1]: libpod-f7223ff32713f399521cd682c78b0e4ee0d07054e29f7fc4d20885f900506fc5.scope: Deactivated successfully.
Oct 01 16:54:22 compute-0 podman[260190]: 2025-10-01 16:54:22.519011414 +0000 UTC m=+0.223610699 container died f7223ff32713f399521cd682c78b0e4ee0d07054e29f7fc4d20885f900506fc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_johnson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 01 16:54:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbce6c3a5713b2f400137021675e2e5fd43c05c85adf41f664abc2456cd35953-merged.mount: Deactivated successfully.
Oct 01 16:54:22 compute-0 podman[260190]: 2025-10-01 16:54:22.578774457 +0000 UTC m=+0.283373742 container remove f7223ff32713f399521cd682c78b0e4ee0d07054e29f7fc4d20885f900506fc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_johnson, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:54:22 compute-0 systemd[1]: libpod-conmon-f7223ff32713f399521cd682c78b0e4ee0d07054e29f7fc4d20885f900506fc5.scope: Deactivated successfully.
Oct 01 16:54:22 compute-0 podman[260230]: 2025-10-01 16:54:22.770109293 +0000 UTC m=+0.066805184 container create 23dd99f48c5a50e65c459b54f2430ded4e3d0639b311bc43a675f2d26d511036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:54:22 compute-0 systemd[1]: Started libpod-conmon-23dd99f48c5a50e65c459b54f2430ded4e3d0639b311bc43a675f2d26d511036.scope.
Oct 01 16:54:22 compute-0 podman[260230]: 2025-10-01 16:54:22.746626602 +0000 UTC m=+0.043322483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:54:22 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142b66a1bfe9131cbf99955be2fc3f4786a31593471ef06e93ac7ceadcae97f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142b66a1bfe9131cbf99955be2fc3f4786a31593471ef06e93ac7ceadcae97f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142b66a1bfe9131cbf99955be2fc3f4786a31593471ef06e93ac7ceadcae97f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142b66a1bfe9131cbf99955be2fc3f4786a31593471ef06e93ac7ceadcae97f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142b66a1bfe9131cbf99955be2fc3f4786a31593471ef06e93ac7ceadcae97f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:22 compute-0 ceph-mon[74273]: pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:22 compute-0 podman[260230]: 2025-10-01 16:54:22.895106078 +0000 UTC m=+0.191801979 container init 23dd99f48c5a50e65c459b54f2430ded4e3d0639b311bc43a675f2d26d511036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bartik, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 16:54:22 compute-0 podman[260230]: 2025-10-01 16:54:22.916144408 +0000 UTC m=+0.212840309 container start 23dd99f48c5a50e65c459b54f2430ded4e3d0639b311bc43a675f2d26d511036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 16:54:22 compute-0 podman[260230]: 2025-10-01 16:54:22.920926799 +0000 UTC m=+0.217622690 container attach 23dd99f48c5a50e65c459b54f2430ded4e3d0639b311bc43a675f2d26d511036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 01 16:54:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:23 compute-0 brave_bartik[260246]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:54:23 compute-0 brave_bartik[260246]: --> relative data size: 1.0
Oct 01 16:54:23 compute-0 brave_bartik[260246]: --> All data devices are unavailable
Oct 01 16:54:23 compute-0 systemd[1]: libpod-23dd99f48c5a50e65c459b54f2430ded4e3d0639b311bc43a675f2d26d511036.scope: Deactivated successfully.
Oct 01 16:54:23 compute-0 podman[260230]: 2025-10-01 16:54:23.956539488 +0000 UTC m=+1.253235349 container died 23dd99f48c5a50e65c459b54f2430ded4e3d0639b311bc43a675f2d26d511036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:54:23 compute-0 systemd[1]: libpod-23dd99f48c5a50e65c459b54f2430ded4e3d0639b311bc43a675f2d26d511036.scope: Consumed 1.000s CPU time.
Oct 01 16:54:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-142b66a1bfe9131cbf99955be2fc3f4786a31593471ef06e93ac7ceadcae97f4-merged.mount: Deactivated successfully.
Oct 01 16:54:24 compute-0 podman[260230]: 2025-10-01 16:54:24.00998029 +0000 UTC m=+1.306676151 container remove 23dd99f48c5a50e65c459b54f2430ded4e3d0639b311bc43a675f2d26d511036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:54:24 compute-0 systemd[1]: libpod-conmon-23dd99f48c5a50e65c459b54f2430ded4e3d0639b311bc43a675f2d26d511036.scope: Deactivated successfully.
Oct 01 16:54:24 compute-0 sudo[260098]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:24 compute-0 sudo[260288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:54:24 compute-0 sudo[260288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:54:24 compute-0 sudo[260288]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:24 compute-0 sudo[260313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:54:24 compute-0 sudo[260313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:54:24 compute-0 sudo[260313]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:24 compute-0 sudo[260338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:54:24 compute-0 sudo[260338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:54:24 compute-0 sudo[260338]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:24 compute-0 sudo[260363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:54:24 compute-0 sudo[260363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:54:24 compute-0 podman[260428]: 2025-10-01 16:54:24.762184568 +0000 UTC m=+0.048736273 container create 9cde74becd8d96a718391717a166b711be7f30d1119809813f3396e3a2ff12b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ganguly, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:54:24 compute-0 systemd[1]: Started libpod-conmon-9cde74becd8d96a718391717a166b711be7f30d1119809813f3396e3a2ff12b2.scope.
Oct 01 16:54:24 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:54:24 compute-0 podman[260428]: 2025-10-01 16:54:24.739555617 +0000 UTC m=+0.026107302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:54:24 compute-0 podman[260428]: 2025-10-01 16:54:24.836828321 +0000 UTC m=+0.123380036 container init 9cde74becd8d96a718391717a166b711be7f30d1119809813f3396e3a2ff12b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ganguly, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:54:24 compute-0 podman[260428]: 2025-10-01 16:54:24.846403871 +0000 UTC m=+0.132955536 container start 9cde74becd8d96a718391717a166b711be7f30d1119809813f3396e3a2ff12b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ganguly, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:54:24 compute-0 podman[260428]: 2025-10-01 16:54:24.850758932 +0000 UTC m=+0.137310607 container attach 9cde74becd8d96a718391717a166b711be7f30d1119809813f3396e3a2ff12b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ganguly, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 16:54:24 compute-0 pedantic_ganguly[260445]: 167 167
Oct 01 16:54:24 compute-0 systemd[1]: libpod-9cde74becd8d96a718391717a166b711be7f30d1119809813f3396e3a2ff12b2.scope: Deactivated successfully.
Oct 01 16:54:24 compute-0 conmon[260445]: conmon 9cde74becd8d96a71839 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9cde74becd8d96a718391717a166b711be7f30d1119809813f3396e3a2ff12b2.scope/container/memory.events
Oct 01 16:54:24 compute-0 podman[260428]: 2025-10-01 16:54:24.854420262 +0000 UTC m=+0.140971937 container died 9cde74becd8d96a718391717a166b711be7f30d1119809813f3396e3a2ff12b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ganguly, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:54:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-d261dc6ba053f495f9c687604b64eb05a28bc511adda2a7149c23d16eecf2011-merged.mount: Deactivated successfully.
Oct 01 16:54:24 compute-0 ceph-mon[74273]: pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:24 compute-0 podman[260428]: 2025-10-01 16:54:24.905046714 +0000 UTC m=+0.191598409 container remove 9cde74becd8d96a718391717a166b711be7f30d1119809813f3396e3a2ff12b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ganguly, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:54:24 compute-0 podman[260442]: 2025-10-01 16:54:24.910467904 +0000 UTC m=+0.094108685 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 01 16:54:24 compute-0 systemd[1]: libpod-conmon-9cde74becd8d96a718391717a166b711be7f30d1119809813f3396e3a2ff12b2.scope: Deactivated successfully.
Oct 01 16:54:25 compute-0 podman[260485]: 2025-10-01 16:54:25.07349638 +0000 UTC m=+0.053514573 container create 64f8a32a876399f729f16c3809faf6bcf9ffa67fadc77bd099334f0c35eed49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:54:25 compute-0 systemd[1]: Started libpod-conmon-64f8a32a876399f729f16c3809faf6bcf9ffa67fadc77bd099334f0c35eed49c.scope.
Oct 01 16:54:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:54:25 compute-0 podman[260485]: 2025-10-01 16:54:25.046885229 +0000 UTC m=+0.026903502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:54:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbd5c1917e29e790a2704f8a1c224e7c93366b88dd039ec8d361a95df41d1bf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbd5c1917e29e790a2704f8a1c224e7c93366b88dd039ec8d361a95df41d1bf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbd5c1917e29e790a2704f8a1c224e7c93366b88dd039ec8d361a95df41d1bf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbd5c1917e29e790a2704f8a1c224e7c93366b88dd039ec8d361a95df41d1bf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:25 compute-0 podman[260485]: 2025-10-01 16:54:25.176679984 +0000 UTC m=+0.156698167 container init 64f8a32a876399f729f16c3809faf6bcf9ffa67fadc77bd099334f0c35eed49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct 01 16:54:25 compute-0 podman[260485]: 2025-10-01 16:54:25.184036634 +0000 UTC m=+0.164054847 container start 64f8a32a876399f729f16c3809faf6bcf9ffa67fadc77bd099334f0c35eed49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_torvalds, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 16:54:25 compute-0 podman[260485]: 2025-10-01 16:54:25.188183064 +0000 UTC m=+0.168201267 container attach 64f8a32a876399f729f16c3809faf6bcf9ffa67fadc77bd099334f0c35eed49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_torvalds, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:54:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]: {
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:     "0": [
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:         {
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "devices": [
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "/dev/loop3"
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             ],
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "lv_name": "ceph_lv0",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "lv_size": "21470642176",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "name": "ceph_lv0",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "tags": {
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.cluster_name": "ceph",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.crush_device_class": "",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.encrypted": "0",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.osd_id": "0",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.type": "block",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.vdo": "0"
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             },
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "type": "block",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "vg_name": "ceph_vg0"
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:         }
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:     ],
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:     "1": [
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:         {
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "devices": [
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "/dev/loop4"
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             ],
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "lv_name": "ceph_lv1",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "lv_size": "21470642176",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "name": "ceph_lv1",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "tags": {
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.cluster_name": "ceph",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.crush_device_class": "",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.encrypted": "0",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.osd_id": "1",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.type": "block",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.vdo": "0"
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             },
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "type": "block",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "vg_name": "ceph_vg1"
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:         }
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:     ],
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:     "2": [
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:         {
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "devices": [
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "/dev/loop5"
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             ],
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "lv_name": "ceph_lv2",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "lv_size": "21470642176",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "name": "ceph_lv2",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "tags": {
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.cluster_name": "ceph",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.crush_device_class": "",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.encrypted": "0",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.osd_id": "2",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.type": "block",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:                 "ceph.vdo": "0"
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             },
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "type": "block",
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:             "vg_name": "ceph_vg2"
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:         }
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]:     ]
Oct 01 16:54:25 compute-0 elastic_torvalds[260502]: }
Oct 01 16:54:25 compute-0 systemd[1]: libpod-64f8a32a876399f729f16c3809faf6bcf9ffa67fadc77bd099334f0c35eed49c.scope: Deactivated successfully.
Oct 01 16:54:25 compute-0 podman[260485]: 2025-10-01 16:54:25.919362052 +0000 UTC m=+0.899380265 container died 64f8a32a876399f729f16c3809faf6bcf9ffa67fadc77bd099334f0c35eed49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:54:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbd5c1917e29e790a2704f8a1c224e7c93366b88dd039ec8d361a95df41d1bf3-merged.mount: Deactivated successfully.
Oct 01 16:54:25 compute-0 podman[260485]: 2025-10-01 16:54:25.980582514 +0000 UTC m=+0.960600697 container remove 64f8a32a876399f729f16c3809faf6bcf9ffa67fadc77bd099334f0c35eed49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_torvalds, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 16:54:26 compute-0 systemd[1]: libpod-conmon-64f8a32a876399f729f16c3809faf6bcf9ffa67fadc77bd099334f0c35eed49c.scope: Deactivated successfully.
Oct 01 16:54:26 compute-0 sudo[260363]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:26 compute-0 sudo[260523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:54:26 compute-0 sudo[260523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:54:26 compute-0 sudo[260523]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:26 compute-0 sudo[260548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:54:26 compute-0 sudo[260548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:54:26 compute-0 sudo[260548]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:26 compute-0 sudo[260573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:54:26 compute-0 sudo[260573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:54:26 compute-0 sudo[260573]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:26 compute-0 sudo[260598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:54:26 compute-0 sudo[260598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:54:26 compute-0 podman[260664]: 2025-10-01 16:54:26.712936541 +0000 UTC m=+0.058327273 container create 4980736e8650f74703a76eb58a0f14f736b6cd414b04bc89bbf8df5b54b951f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 16:54:26 compute-0 systemd[1]: Started libpod-conmon-4980736e8650f74703a76eb58a0f14f736b6cd414b04bc89bbf8df5b54b951f9.scope.
Oct 01 16:54:26 compute-0 podman[260664]: 2025-10-01 16:54:26.68723766 +0000 UTC m=+0.032628422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:54:26 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:54:26 compute-0 podman[260664]: 2025-10-01 16:54:26.807850165 +0000 UTC m=+0.153240877 container init 4980736e8650f74703a76eb58a0f14f736b6cd414b04bc89bbf8df5b54b951f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 16:54:26 compute-0 podman[260664]: 2025-10-01 16:54:26.814318475 +0000 UTC m=+0.159709197 container start 4980736e8650f74703a76eb58a0f14f736b6cd414b04bc89bbf8df5b54b951f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:54:26 compute-0 podman[260664]: 2025-10-01 16:54:26.818816905 +0000 UTC m=+0.164207597 container attach 4980736e8650f74703a76eb58a0f14f736b6cd414b04bc89bbf8df5b54b951f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jones, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 16:54:26 compute-0 stupefied_jones[260680]: 167 167
Oct 01 16:54:26 compute-0 systemd[1]: libpod-4980736e8650f74703a76eb58a0f14f736b6cd414b04bc89bbf8df5b54b951f9.scope: Deactivated successfully.
Oct 01 16:54:26 compute-0 podman[260664]: 2025-10-01 16:54:26.820375645 +0000 UTC m=+0.165766377 container died 4980736e8650f74703a76eb58a0f14f736b6cd414b04bc89bbf8df5b54b951f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 16:54:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-67938c3de6179241c5fbf263fe8b57048c73c315930580055ad405ecf371b44d-merged.mount: Deactivated successfully.
Oct 01 16:54:26 compute-0 podman[260664]: 2025-10-01 16:54:26.864070047 +0000 UTC m=+0.209460769 container remove 4980736e8650f74703a76eb58a0f14f736b6cd414b04bc89bbf8df5b54b951f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jones, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:54:26 compute-0 systemd[1]: libpod-conmon-4980736e8650f74703a76eb58a0f14f736b6cd414b04bc89bbf8df5b54b951f9.scope: Deactivated successfully.
Oct 01 16:54:26 compute-0 ceph-mon[74273]: pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:27 compute-0 podman[260706]: 2025-10-01 16:54:27.089086815 +0000 UTC m=+0.067474233 container create 0819bad31aabc82983ff49eac797a0de8ce5405089d1aab037cbfb5e6d326f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:54:27 compute-0 systemd[1]: Started libpod-conmon-0819bad31aabc82983ff49eac797a0de8ce5405089d1aab037cbfb5e6d326f99.scope.
Oct 01 16:54:27 compute-0 podman[260706]: 2025-10-01 16:54:27.061505554 +0000 UTC m=+0.039893032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:54:27 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:54:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e6867895e6448696c70c378fe824157aea27709e786414bf14b3a53e56d271/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e6867895e6448696c70c378fe824157aea27709e786414bf14b3a53e56d271/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e6867895e6448696c70c378fe824157aea27709e786414bf14b3a53e56d271/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e6867895e6448696c70c378fe824157aea27709e786414bf14b3a53e56d271/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:54:27 compute-0 podman[260706]: 2025-10-01 16:54:27.198345239 +0000 UTC m=+0.176732647 container init 0819bad31aabc82983ff49eac797a0de8ce5405089d1aab037cbfb5e6d326f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 16:54:27 compute-0 podman[260706]: 2025-10-01 16:54:27.20370336 +0000 UTC m=+0.182090738 container start 0819bad31aabc82983ff49eac797a0de8ce5405089d1aab037cbfb5e6d326f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:54:27 compute-0 podman[260706]: 2025-10-01 16:54:27.20677019 +0000 UTC m=+0.185157578 container attach 0819bad31aabc82983ff49eac797a0de8ce5405089d1aab037cbfb5e6d326f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 16:54:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]: {
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:         "osd_id": 2,
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:         "type": "bluestore"
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:     },
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:         "osd_id": 0,
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:         "type": "bluestore"
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:     },
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:         "osd_id": 1,
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:         "type": "bluestore"
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]:     }
Oct 01 16:54:28 compute-0 distracted_agnesi[260724]: }
Oct 01 16:54:28 compute-0 systemd[1]: libpod-0819bad31aabc82983ff49eac797a0de8ce5405089d1aab037cbfb5e6d326f99.scope: Deactivated successfully.
Oct 01 16:54:28 compute-0 podman[260706]: 2025-10-01 16:54:28.187442217 +0000 UTC m=+1.165829595 container died 0819bad31aabc82983ff49eac797a0de8ce5405089d1aab037cbfb5e6d326f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 01 16:54:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-35e6867895e6448696c70c378fe824157aea27709e786414bf14b3a53e56d271-merged.mount: Deactivated successfully.
Oct 01 16:54:28 compute-0 podman[260706]: 2025-10-01 16:54:28.250628949 +0000 UTC m=+1.229016327 container remove 0819bad31aabc82983ff49eac797a0de8ce5405089d1aab037cbfb5e6d326f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:54:28 compute-0 systemd[1]: libpod-conmon-0819bad31aabc82983ff49eac797a0de8ce5405089d1aab037cbfb5e6d326f99.scope: Deactivated successfully.
Oct 01 16:54:28 compute-0 sudo[260598]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:54:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:54:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:54:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:54:28 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 528ab391-e71d-40f6-a18d-4698bba583b2 does not exist
Oct 01 16:54:28 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev e640f2e3-a407-4897-bed8-0e1afb2ed043 does not exist
Oct 01 16:54:28 compute-0 sudo[260771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:54:28 compute-0 sudo[260771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:54:28 compute-0 sudo[260771]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:28 compute-0 sudo[260796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:54:28 compute-0 sudo[260796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:54:28 compute-0 sudo[260796]: pam_unix(sudo:session): session closed for user root
Oct 01 16:54:28 compute-0 ceph-mon[74273]: pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:54:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:54:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:54:30 compute-0 ceph-mon[74273]: pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:30 compute-0 nova_compute[259504]: 2025-10-01 16:54:30.972 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:54:31 compute-0 nova_compute[259504]: 2025-10-01 16:54:31.011 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:54:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:32 compute-0 ceph-mon[74273]: pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:34 compute-0 ceph-mon[74273]: pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:54:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 16:54:35 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3644797958' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 16:54:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 16:54:35 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3644797958' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 16:54:35 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3644797958' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 16:54:35 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3644797958' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 16:54:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 16:54:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1184270544' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 16:54:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 16:54:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1184270544' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 16:54:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 16:54:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1713354507' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 16:54:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 16:54:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1713354507' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 16:54:37 compute-0 ceph-mon[74273]: pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1184270544' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 16:54:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1184270544' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 16:54:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1713354507' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 16:54:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1713354507' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 16:54:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:39 compute-0 ceph-mon[74273]: pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:39 compute-0 podman[260821]: 2025-10-01 16:54:39.762137172 +0000 UTC m=+0.077016814 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 01 16:54:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:54:41 compute-0 ceph-mon[74273]: pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:54:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:54:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:54:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:54:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:54:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:54:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:41 compute-0 podman[260841]: 2025-10-01 16:54:41.765181776 +0000 UTC m=+0.080124994 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 01 16:54:43 compute-0 sshd-session[260861]: Invalid user user from 185.156.73.233 port 34992
Oct 01 16:54:43 compute-0 ceph-mon[74273]: pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:43 compute-0 sshd-session[260861]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 16:54:43 compute-0 sshd-session[260861]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.156.73.233
Oct 01 16:54:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:44 compute-0 ceph-mon[74273]: pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:54:45 compute-0 sshd-session[260861]: Failed password for invalid user user from 185.156.73.233 port 34992 ssh2
Oct 01 16:54:45 compute-0 sshd-session[260861]: Connection closed by invalid user user 185.156.73.233 port 34992 [preauth]
Oct 01 16:54:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:46 compute-0 ceph-mon[74273]: pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:49 compute-0 ceph-mon[74273]: pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:54:50 compute-0 ceph-mon[74273]: pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:51 compute-0 nova_compute[259504]: 2025-10-01 16:54:51.157 2 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 0.18 sec
Oct 01 16:54:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:52 compute-0 podman[260863]: 2025-10-01 16:54:52.841251783 +0000 UTC m=+0.130164026 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 01 16:54:53 compute-0 ceph-mon[74273]: pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:54 compute-0 ceph-mon[74273]: pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:54:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:55 compute-0 podman[260889]: 2025-10-01 16:54:55.775176674 +0000 UTC m=+0.082006264 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 01 16:54:57 compute-0 ceph-mon[74273]: pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:59 compute-0 ceph-mon[74273]: pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:54:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:55:01 compute-0 ceph-mon[74273]: pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:03 compute-0 ceph-mon[74273]: pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:55:05 compute-0 ceph-mon[74273]: pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:06 compute-0 ceph-mon[74273]: pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:07 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 16:55:07 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5726 writes, 24K keys, 5726 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5726 writes, 938 syncs, 6.10 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 16:55:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:08 compute-0 ceph-mon[74273]: pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:55:10 compute-0 podman[260908]: 2025-10-01 16:55:10.792156111 +0000 UTC m=+0.095438725 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 01 16:55:10 compute-0 ceph-mon[74273]: pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:55:11
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'images', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'vms', '.mgr', 'cephfs.cephfs.meta']
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:55:11 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 16:55:11 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 6849 writes, 28K keys, 6849 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6849 writes, 1288 syncs, 5.32 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 277 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f21090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f21090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f21090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 16:55:11 compute-0 nova_compute[259504]: 2025-10-01 16:55:11.752 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:55:11 compute-0 nova_compute[259504]: 2025-10-01 16:55:11.753 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:55:11 compute-0 nova_compute[259504]: 2025-10-01 16:55:11.754 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 16:55:11 compute-0 nova_compute[259504]: 2025-10-01 16:55:11.754 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 16:55:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:11 compute-0 nova_compute[259504]: 2025-10-01 16:55:11.774 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 16:55:11 compute-0 nova_compute[259504]: 2025-10-01 16:55:11.775 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:55:11 compute-0 nova_compute[259504]: 2025-10-01 16:55:11.776 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:55:11 compute-0 nova_compute[259504]: 2025-10-01 16:55:11.777 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:55:11 compute-0 nova_compute[259504]: 2025-10-01 16:55:11.777 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:55:11 compute-0 nova_compute[259504]: 2025-10-01 16:55:11.777 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:55:11 compute-0 nova_compute[259504]: 2025-10-01 16:55:11.778 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:55:11 compute-0 nova_compute[259504]: 2025-10-01 16:55:11.779 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 16:55:11 compute-0 nova_compute[259504]: 2025-10-01 16:55:11.779 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:55:11 compute-0 nova_compute[259504]: 2025-10-01 16:55:11.813 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:55:11 compute-0 nova_compute[259504]: 2025-10-01 16:55:11.814 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:55:11 compute-0 nova_compute[259504]: 2025-10-01 16:55:11.814 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:55:11 compute-0 nova_compute[259504]: 2025-10-01 16:55:11.814 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 16:55:11 compute-0 nova_compute[259504]: 2025-10-01 16:55:11.815 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 16:55:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 16:55:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/663025593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:55:12 compute-0 nova_compute[259504]: 2025-10-01 16:55:12.261 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 16:55:12 compute-0 nova_compute[259504]: 2025-10-01 16:55:12.490 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 16:55:12 compute-0 nova_compute[259504]: 2025-10-01 16:55:12.492 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5178MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 16:55:12 compute-0 nova_compute[259504]: 2025-10-01 16:55:12.492 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:55:12 compute-0 nova_compute[259504]: 2025-10-01 16:55:12.493 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:55:12 compute-0 nova_compute[259504]: 2025-10-01 16:55:12.587 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 16:55:12 compute-0 nova_compute[259504]: 2025-10-01 16:55:12.588 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 16:55:12 compute-0 nova_compute[259504]: 2025-10-01 16:55:12.622 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 16:55:12 compute-0 podman[260950]: 2025-10-01 16:55:12.758323725 +0000 UTC m=+0.076181353 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 01 16:55:13 compute-0 ceph-mon[74273]: pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/663025593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:55:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 16:55:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1154929385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:55:13 compute-0 nova_compute[259504]: 2025-10-01 16:55:13.132 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 16:55:13 compute-0 nova_compute[259504]: 2025-10-01 16:55:13.140 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 16:55:13 compute-0 nova_compute[259504]: 2025-10-01 16:55:13.156 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 16:55:13 compute-0 nova_compute[259504]: 2025-10-01 16:55:13.157 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 16:55:13 compute-0 nova_compute[259504]: 2025-10-01 16:55:13.157 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:55:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1154929385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:55:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:55:15 compute-0 ceph-mon[74273]: pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:16 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 16:55:16 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 5598 writes, 23K keys, 5598 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5598 writes, 864 syncs, 6.48 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.55 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.55 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 16:55:16 compute-0 ceph-mon[74273]: pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:18 compute-0 ceph-mgr[74571]: [devicehealth INFO root] Check health
Oct 01 16:55:18 compute-0 ceph-mon[74273]: pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:55:19.960 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:55:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:55:19.961 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:55:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:55:19.961 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:55:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:55:20 compute-0 ceph-mon[74273]: pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:55:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:22 compute-0 ceph-mon[74273]: pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:23 compute-0 podman[260991]: 2025-10-01 16:55:23.818350369 +0000 UTC m=+0.121857152 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 01 16:55:24 compute-0 ceph-mon[74273]: pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:55:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:26 compute-0 podman[261017]: 2025-10-01 16:55:26.747299896 +0000 UTC m=+0.060540632 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 01 16:55:26 compute-0 ceph-mon[74273]: pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:28 compute-0 sudo[261036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:55:28 compute-0 sudo[261036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:28 compute-0 sudo[261036]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:28 compute-0 sudo[261061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:55:28 compute-0 sudo[261061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:28 compute-0 sudo[261061]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:28 compute-0 sudo[261086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:55:28 compute-0 sudo[261086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:28 compute-0 sudo[261086]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:28 compute-0 sudo[261111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 01 16:55:28 compute-0 sudo[261111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:28 compute-0 sudo[261111]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:55:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:55:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:55:29 compute-0 ceph-mon[74273]: pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:55:29 compute-0 sudo[261156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:55:29 compute-0 sudo[261156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:29 compute-0 sudo[261156]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:29 compute-0 sudo[261181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:55:29 compute-0 sudo[261181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:29 compute-0 sudo[261181]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:29 compute-0 sudo[261206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:55:29 compute-0 sudo[261206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:29 compute-0 sudo[261206]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:29 compute-0 sudo[261231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:55:29 compute-0 sudo[261231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:29 compute-0 sudo[261231]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:55:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:55:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:55:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:55:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:55:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:55:29 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev b974827a-34c4-4c4f-a27b-7e74cee306f6 does not exist
Oct 01 16:55:29 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 421bd68a-da4a-43c6-9810-a690f74e2573 does not exist
Oct 01 16:55:29 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 7dd4d1ca-38de-49a8-acb9-d2500339f41c does not exist
Oct 01 16:55:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:55:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:55:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:55:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:55:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:55:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:55:29 compute-0 sudo[261289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:55:29 compute-0 sudo[261289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:29 compute-0 sudo[261289]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:29 compute-0 sudo[261314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:55:29 compute-0 sudo[261314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:29 compute-0 sudo[261314]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:29 compute-0 sudo[261339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:55:29 compute-0 sudo[261339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:29 compute-0 sudo[261339]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:55:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:55:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:55:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:55:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:55:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:55:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:55:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:55:30 compute-0 sudo[261364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:55:30 compute-0 sudo[261364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:55:30 compute-0 podman[261431]: 2025-10-01 16:55:30.343541586 +0000 UTC m=+0.040296755 container create 2dfc4e545bec281574ad43e79c6b94db72025d53e31a9d19d0567b6a2128866e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 01 16:55:30 compute-0 systemd[1]: Started libpod-conmon-2dfc4e545bec281574ad43e79c6b94db72025d53e31a9d19d0567b6a2128866e.scope.
Oct 01 16:55:30 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:55:30 compute-0 podman[261431]: 2025-10-01 16:55:30.32426716 +0000 UTC m=+0.021022349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:55:30 compute-0 podman[261431]: 2025-10-01 16:55:30.420569002 +0000 UTC m=+0.117324181 container init 2dfc4e545bec281574ad43e79c6b94db72025d53e31a9d19d0567b6a2128866e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_agnesi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 16:55:30 compute-0 podman[261431]: 2025-10-01 16:55:30.428509149 +0000 UTC m=+0.125264308 container start 2dfc4e545bec281574ad43e79c6b94db72025d53e31a9d19d0567b6a2128866e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:55:30 compute-0 podman[261431]: 2025-10-01 16:55:30.432296327 +0000 UTC m=+0.129051486 container attach 2dfc4e545bec281574ad43e79c6b94db72025d53e31a9d19d0567b6a2128866e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_agnesi, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:55:30 compute-0 festive_agnesi[261447]: 167 167
Oct 01 16:55:30 compute-0 systemd[1]: libpod-2dfc4e545bec281574ad43e79c6b94db72025d53e31a9d19d0567b6a2128866e.scope: Deactivated successfully.
Oct 01 16:55:30 compute-0 podman[261431]: 2025-10-01 16:55:30.436045707 +0000 UTC m=+0.132800866 container died 2dfc4e545bec281574ad43e79c6b94db72025d53e31a9d19d0567b6a2128866e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 16:55:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3b2d2015a2d281b2c870a445c8a542a96986b23acc277f4475cb1f203100fe2-merged.mount: Deactivated successfully.
Oct 01 16:55:30 compute-0 podman[261431]: 2025-10-01 16:55:30.488098662 +0000 UTC m=+0.184853841 container remove 2dfc4e545bec281574ad43e79c6b94db72025d53e31a9d19d0567b6a2128866e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_agnesi, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:55:30 compute-0 systemd[1]: libpod-conmon-2dfc4e545bec281574ad43e79c6b94db72025d53e31a9d19d0567b6a2128866e.scope: Deactivated successfully.
Oct 01 16:55:30 compute-0 podman[261470]: 2025-10-01 16:55:30.68923233 +0000 UTC m=+0.087314626 container create eadf2f3a504d1d861cd470ca3b6266188abf8fc7eea5fd576aeec3e46070a416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 16:55:30 compute-0 podman[261470]: 2025-10-01 16:55:30.623929399 +0000 UTC m=+0.022011715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:55:30 compute-0 systemd[1]: Started libpod-conmon-eadf2f3a504d1d861cd470ca3b6266188abf8fc7eea5fd576aeec3e46070a416.scope.
Oct 01 16:55:30 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ffb41d83a4e450be72624d7e8d011a1de037e07135926781b6f6c83b33984d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ffb41d83a4e450be72624d7e8d011a1de037e07135926781b6f6c83b33984d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ffb41d83a4e450be72624d7e8d011a1de037e07135926781b6f6c83b33984d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ffb41d83a4e450be72624d7e8d011a1de037e07135926781b6f6c83b33984d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ffb41d83a4e450be72624d7e8d011a1de037e07135926781b6f6c83b33984d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:55:30 compute-0 podman[261470]: 2025-10-01 16:55:30.79398271 +0000 UTC m=+0.192065026 container init eadf2f3a504d1d861cd470ca3b6266188abf8fc7eea5fd576aeec3e46070a416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:55:30 compute-0 podman[261470]: 2025-10-01 16:55:30.804359069 +0000 UTC m=+0.202441396 container start eadf2f3a504d1d861cd470ca3b6266188abf8fc7eea5fd576aeec3e46070a416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sinoussi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:55:30 compute-0 podman[261470]: 2025-10-01 16:55:30.808999074 +0000 UTC m=+0.207081390 container attach eadf2f3a504d1d861cd470ca3b6266188abf8fc7eea5fd576aeec3e46070a416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:55:31 compute-0 ceph-mon[74273]: pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:31 compute-0 nervous_sinoussi[261486]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:55:31 compute-0 nervous_sinoussi[261486]: --> relative data size: 1.0
Oct 01 16:55:31 compute-0 nervous_sinoussi[261486]: --> All data devices are unavailable
Oct 01 16:55:31 compute-0 systemd[1]: libpod-eadf2f3a504d1d861cd470ca3b6266188abf8fc7eea5fd576aeec3e46070a416.scope: Deactivated successfully.
Oct 01 16:55:31 compute-0 systemd[1]: libpod-eadf2f3a504d1d861cd470ca3b6266188abf8fc7eea5fd576aeec3e46070a416.scope: Consumed 1.052s CPU time.
Oct 01 16:55:31 compute-0 podman[261470]: 2025-10-01 16:55:31.915183467 +0000 UTC m=+1.313265773 container died eadf2f3a504d1d861cd470ca3b6266188abf8fc7eea5fd576aeec3e46070a416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 16:55:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-51ffb41d83a4e450be72624d7e8d011a1de037e07135926781b6f6c83b33984d-merged.mount: Deactivated successfully.
Oct 01 16:55:31 compute-0 podman[261470]: 2025-10-01 16:55:31.987522548 +0000 UTC m=+1.385604854 container remove eadf2f3a504d1d861cd470ca3b6266188abf8fc7eea5fd576aeec3e46070a416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:55:31 compute-0 systemd[1]: libpod-conmon-eadf2f3a504d1d861cd470ca3b6266188abf8fc7eea5fd576aeec3e46070a416.scope: Deactivated successfully.
Oct 01 16:55:32 compute-0 sudo[261364]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:32 compute-0 sudo[261529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:55:32 compute-0 sudo[261529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:32 compute-0 sudo[261529]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:32 compute-0 sudo[261554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:55:32 compute-0 sudo[261554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:32 compute-0 sudo[261554]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:32 compute-0 sudo[261579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:55:32 compute-0 sudo[261579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:32 compute-0 sudo[261579]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:32 compute-0 sudo[261604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:55:32 compute-0 sudo[261604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:32 compute-0 podman[261669]: 2025-10-01 16:55:32.727794582 +0000 UTC m=+0.049141258 container create 00b316d83b977ac025fdfcad23fb459db852fdd8db4bd03e5f41875e708049ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_stonebraker, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct 01 16:55:32 compute-0 systemd[1]: Started libpod-conmon-00b316d83b977ac025fdfcad23fb459db852fdd8db4bd03e5f41875e708049ce.scope.
Oct 01 16:55:32 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:55:32 compute-0 podman[261669]: 2025-10-01 16:55:32.704105721 +0000 UTC m=+0.025452417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:55:32 compute-0 podman[261669]: 2025-10-01 16:55:32.800178483 +0000 UTC m=+0.121525179 container init 00b316d83b977ac025fdfcad23fb459db852fdd8db4bd03e5f41875e708049ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 01 16:55:32 compute-0 podman[261669]: 2025-10-01 16:55:32.806231677 +0000 UTC m=+0.127578353 container start 00b316d83b977ac025fdfcad23fb459db852fdd8db4bd03e5f41875e708049ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:55:32 compute-0 podman[261669]: 2025-10-01 16:55:32.809135442 +0000 UTC m=+0.130482138 container attach 00b316d83b977ac025fdfcad23fb459db852fdd8db4bd03e5f41875e708049ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_stonebraker, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:55:32 compute-0 happy_stonebraker[261685]: 167 167
Oct 01 16:55:32 compute-0 systemd[1]: libpod-00b316d83b977ac025fdfcad23fb459db852fdd8db4bd03e5f41875e708049ce.scope: Deactivated successfully.
Oct 01 16:55:32 compute-0 podman[261669]: 2025-10-01 16:55:32.811506941 +0000 UTC m=+0.132853617 container died 00b316d83b977ac025fdfcad23fb459db852fdd8db4bd03e5f41875e708049ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:55:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bab0cb711080520d783a7f6f61bdc95848c1953e207ce66b361b86b971e1936-merged.mount: Deactivated successfully.
Oct 01 16:55:32 compute-0 podman[261669]: 2025-10-01 16:55:32.843862487 +0000 UTC m=+0.165209163 container remove 00b316d83b977ac025fdfcad23fb459db852fdd8db4bd03e5f41875e708049ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 16:55:32 compute-0 systemd[1]: libpod-conmon-00b316d83b977ac025fdfcad23fb459db852fdd8db4bd03e5f41875e708049ce.scope: Deactivated successfully.
Oct 01 16:55:33 compute-0 podman[261709]: 2025-10-01 16:55:33.042284604 +0000 UTC m=+0.049013330 container create 055b9c986fb9c99bedc9fccb439a5a649054f3127af7f808e96d135234dd6d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mahavira, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 01 16:55:33 compute-0 ceph-mon[74273]: pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:33 compute-0 systemd[1]: Started libpod-conmon-055b9c986fb9c99bedc9fccb439a5a649054f3127af7f808e96d135234dd6d47.scope.
Oct 01 16:55:33 compute-0 podman[261709]: 2025-10-01 16:55:33.01967996 +0000 UTC m=+0.026408696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:55:33 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93630938a12180aa65703dc7da5bbe8a058fb13c684ddc0d24d56cbb9f01b263/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93630938a12180aa65703dc7da5bbe8a058fb13c684ddc0d24d56cbb9f01b263/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93630938a12180aa65703dc7da5bbe8a058fb13c684ddc0d24d56cbb9f01b263/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93630938a12180aa65703dc7da5bbe8a058fb13c684ddc0d24d56cbb9f01b263/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:55:33 compute-0 podman[261709]: 2025-10-01 16:55:33.144253816 +0000 UTC m=+0.150982542 container init 055b9c986fb9c99bedc9fccb439a5a649054f3127af7f808e96d135234dd6d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mahavira, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 16:55:33 compute-0 podman[261709]: 2025-10-01 16:55:33.152986106 +0000 UTC m=+0.159714832 container start 055b9c986fb9c99bedc9fccb439a5a649054f3127af7f808e96d135234dd6d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 01 16:55:33 compute-0 podman[261709]: 2025-10-01 16:55:33.161839762 +0000 UTC m=+0.168568508 container attach 055b9c986fb9c99bedc9fccb439a5a649054f3127af7f808e96d135234dd6d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 01 16:55:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:33 compute-0 angry_mahavira[261725]: {
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:     "0": [
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:         {
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "devices": [
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "/dev/loop3"
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             ],
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "lv_name": "ceph_lv0",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "lv_size": "21470642176",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "name": "ceph_lv0",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "tags": {
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.cluster_name": "ceph",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.crush_device_class": "",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.encrypted": "0",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.osd_id": "0",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.type": "block",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.vdo": "0"
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             },
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "type": "block",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "vg_name": "ceph_vg0"
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:         }
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:     ],
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:     "1": [
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:         {
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "devices": [
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "/dev/loop4"
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             ],
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "lv_name": "ceph_lv1",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "lv_size": "21470642176",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "name": "ceph_lv1",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "tags": {
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.cluster_name": "ceph",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.crush_device_class": "",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.encrypted": "0",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.osd_id": "1",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.type": "block",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.vdo": "0"
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             },
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "type": "block",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "vg_name": "ceph_vg1"
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:         }
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:     ],
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:     "2": [
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:         {
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "devices": [
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "/dev/loop5"
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             ],
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "lv_name": "ceph_lv2",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "lv_size": "21470642176",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "name": "ceph_lv2",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "tags": {
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.cluster_name": "ceph",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.crush_device_class": "",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.encrypted": "0",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.osd_id": "2",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.type": "block",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:                 "ceph.vdo": "0"
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             },
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "type": "block",
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:             "vg_name": "ceph_vg2"
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:         }
Oct 01 16:55:33 compute-0 angry_mahavira[261725]:     ]
Oct 01 16:55:33 compute-0 angry_mahavira[261725]: }
Oct 01 16:55:33 compute-0 systemd[1]: libpod-055b9c986fb9c99bedc9fccb439a5a649054f3127af7f808e96d135234dd6d47.scope: Deactivated successfully.
Oct 01 16:55:33 compute-0 podman[261709]: 2025-10-01 16:55:33.932698076 +0000 UTC m=+0.939426802 container died 055b9c986fb9c99bedc9fccb439a5a649054f3127af7f808e96d135234dd6d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mahavira, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 16:55:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-93630938a12180aa65703dc7da5bbe8a058fb13c684ddc0d24d56cbb9f01b263-merged.mount: Deactivated successfully.
Oct 01 16:55:34 compute-0 podman[261709]: 2025-10-01 16:55:34.188024198 +0000 UTC m=+1.194752954 container remove 055b9c986fb9c99bedc9fccb439a5a649054f3127af7f808e96d135234dd6d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mahavira, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:55:34 compute-0 systemd[1]: libpod-conmon-055b9c986fb9c99bedc9fccb439a5a649054f3127af7f808e96d135234dd6d47.scope: Deactivated successfully.
Oct 01 16:55:34 compute-0 sudo[261604]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:34 compute-0 sudo[261747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:55:34 compute-0 sudo[261747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:34 compute-0 sudo[261747]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:34 compute-0 sudo[261772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:55:34 compute-0 sudo[261772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:34 compute-0 sudo[261772]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:34 compute-0 sudo[261797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:55:34 compute-0 sudo[261797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:34 compute-0 sudo[261797]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:34 compute-0 sudo[261822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:55:34 compute-0 sudo[261822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:34 compute-0 podman[261888]: 2025-10-01 16:55:34.94996287 +0000 UTC m=+0.062980766 container create 916623fca110293e4de938e2721fed46a6fb8783e3f30130b221a1e5deb30beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 16:55:35 compute-0 podman[261888]: 2025-10-01 16:55:34.910450058 +0000 UTC m=+0.023467954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:55:35 compute-0 systemd[1]: Started libpod-conmon-916623fca110293e4de938e2721fed46a6fb8783e3f30130b221a1e5deb30beb.scope.
Oct 01 16:55:35 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:55:35 compute-0 podman[261888]: 2025-10-01 16:55:35.129371528 +0000 UTC m=+0.242389504 container init 916623fca110293e4de938e2721fed46a6fb8783e3f30130b221a1e5deb30beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:55:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:55:35 compute-0 podman[261888]: 2025-10-01 16:55:35.141625798 +0000 UTC m=+0.254643704 container start 916623fca110293e4de938e2721fed46a6fb8783e3f30130b221a1e5deb30beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brattain, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:55:35 compute-0 hopeful_brattain[261904]: 167 167
Oct 01 16:55:35 compute-0 systemd[1]: libpod-916623fca110293e4de938e2721fed46a6fb8783e3f30130b221a1e5deb30beb.scope: Deactivated successfully.
Oct 01 16:55:35 compute-0 ceph-mon[74273]: pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:35 compute-0 podman[261888]: 2025-10-01 16:55:35.172325452 +0000 UTC m=+0.285343418 container attach 916623fca110293e4de938e2721fed46a6fb8783e3f30130b221a1e5deb30beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brattain, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:55:35 compute-0 podman[261888]: 2025-10-01 16:55:35.17304806 +0000 UTC m=+0.286065976 container died 916623fca110293e4de938e2721fed46a6fb8783e3f30130b221a1e5deb30beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 16:55:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-58be0169bba7cb5bbcbf69eee48449557073da09f5d666785f8baac3599bdf42-merged.mount: Deactivated successfully.
Oct 01 16:55:35 compute-0 podman[261888]: 2025-10-01 16:55:35.578193059 +0000 UTC m=+0.691210945 container remove 916623fca110293e4de938e2721fed46a6fb8783e3f30130b221a1e5deb30beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:55:35 compute-0 systemd[1]: libpod-conmon-916623fca110293e4de938e2721fed46a6fb8783e3f30130b221a1e5deb30beb.scope: Deactivated successfully.
Oct 01 16:55:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:35 compute-0 podman[261929]: 2025-10-01 16:55:35.810829779 +0000 UTC m=+0.061345178 container create 5c220e373b1ec2ae894ed073107005aec2b6359b04d18f3c9621a8440b5d50f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_tesla, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:55:35 compute-0 systemd[1]: Started libpod-conmon-5c220e373b1ec2ae894ed073107005aec2b6359b04d18f3c9621a8440b5d50f6.scope.
Oct 01 16:55:35 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52174d95665f6389fa37969c4e83fa4de1c3a6e45cb45d01def29d36d6673765/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52174d95665f6389fa37969c4e83fa4de1c3a6e45cb45d01def29d36d6673765/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52174d95665f6389fa37969c4e83fa4de1c3a6e45cb45d01def29d36d6673765/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52174d95665f6389fa37969c4e83fa4de1c3a6e45cb45d01def29d36d6673765/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:55:35 compute-0 podman[261929]: 2025-10-01 16:55:35.791658776 +0000 UTC m=+0.042174205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:55:35 compute-0 podman[261929]: 2025-10-01 16:55:35.899473436 +0000 UTC m=+0.149988935 container init 5c220e373b1ec2ae894ed073107005aec2b6359b04d18f3c9621a8440b5d50f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 16:55:35 compute-0 podman[261929]: 2025-10-01 16:55:35.907292336 +0000 UTC m=+0.157807735 container start 5c220e373b1ec2ae894ed073107005aec2b6359b04d18f3c9621a8440b5d50f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:55:35 compute-0 podman[261929]: 2025-10-01 16:55:35.913028011 +0000 UTC m=+0.163543610 container attach 5c220e373b1ec2ae894ed073107005aec2b6359b04d18f3c9621a8440b5d50f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:55:36 compute-0 ceph-mon[74273]: pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:36 compute-0 brave_tesla[261946]: {
Oct 01 16:55:36 compute-0 brave_tesla[261946]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:55:36 compute-0 brave_tesla[261946]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:55:36 compute-0 brave_tesla[261946]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:55:36 compute-0 brave_tesla[261946]:         "osd_id": 2,
Oct 01 16:55:36 compute-0 brave_tesla[261946]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:55:36 compute-0 brave_tesla[261946]:         "type": "bluestore"
Oct 01 16:55:36 compute-0 brave_tesla[261946]:     },
Oct 01 16:55:36 compute-0 brave_tesla[261946]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:55:36 compute-0 brave_tesla[261946]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:55:36 compute-0 brave_tesla[261946]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:55:36 compute-0 brave_tesla[261946]:         "osd_id": 0,
Oct 01 16:55:36 compute-0 brave_tesla[261946]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:55:36 compute-0 brave_tesla[261946]:         "type": "bluestore"
Oct 01 16:55:36 compute-0 brave_tesla[261946]:     },
Oct 01 16:55:36 compute-0 brave_tesla[261946]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:55:36 compute-0 brave_tesla[261946]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:55:36 compute-0 brave_tesla[261946]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:55:36 compute-0 brave_tesla[261946]:         "osd_id": 1,
Oct 01 16:55:36 compute-0 brave_tesla[261946]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:55:36 compute-0 brave_tesla[261946]:         "type": "bluestore"
Oct 01 16:55:36 compute-0 brave_tesla[261946]:     }
Oct 01 16:55:36 compute-0 brave_tesla[261946]: }
Oct 01 16:55:36 compute-0 systemd[1]: libpod-5c220e373b1ec2ae894ed073107005aec2b6359b04d18f3c9621a8440b5d50f6.scope: Deactivated successfully.
Oct 01 16:55:36 compute-0 podman[261929]: 2025-10-01 16:55:36.948846894 +0000 UTC m=+1.199362293 container died 5c220e373b1ec2ae894ed073107005aec2b6359b04d18f3c9621a8440b5d50f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct 01 16:55:36 compute-0 systemd[1]: libpod-5c220e373b1ec2ae894ed073107005aec2b6359b04d18f3c9621a8440b5d50f6.scope: Consumed 1.031s CPU time.
Oct 01 16:55:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-52174d95665f6389fa37969c4e83fa4de1c3a6e45cb45d01def29d36d6673765-merged.mount: Deactivated successfully.
Oct 01 16:55:37 compute-0 podman[261929]: 2025-10-01 16:55:37.18827906 +0000 UTC m=+1.438794459 container remove 5c220e373b1ec2ae894ed073107005aec2b6359b04d18f3c9621a8440b5d50f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_tesla, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:55:37 compute-0 systemd[1]: libpod-conmon-5c220e373b1ec2ae894ed073107005aec2b6359b04d18f3c9621a8440b5d50f6.scope: Deactivated successfully.
Oct 01 16:55:37 compute-0 sudo[261822]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:55:37 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:55:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:55:37 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:55:37 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev cdf6df22-728f-4ddb-b41a-be036909c69a does not exist
Oct 01 16:55:37 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev d28fd09c-e1ff-4c25-aad4-988e2f65705c does not exist
Oct 01 16:55:37 compute-0 sudo[261993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:55:37 compute-0 sudo[261993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:37 compute-0 sudo[261993]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:37 compute-0 sudo[262018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:55:37 compute-0 sudo[262018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:55:37 compute-0 sudo[262018]: pam_unix(sudo:session): session closed for user root
Oct 01 16:55:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:38 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:55:38 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:55:38 compute-0 ceph-mon[74273]: pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct 01 16:55:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1126986115' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 01 16:55:39 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 01 16:55:39 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 01 16:55:39 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 01 16:55:39 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1126986115' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 01 16:55:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:55:40 compute-0 ceph-mon[74273]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 01 16:55:40 compute-0 ceph-mon[74273]: pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:55:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:55:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:55:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:55:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:55:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:55:41 compute-0 podman[262043]: 2025-10-01 16:55:41.770936167 +0000 UTC m=+0.079458350 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 01 16:55:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:43 compute-0 ceph-mon[74273]: pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:43 compute-0 podman[262062]: 2025-10-01 16:55:43.737155556 +0000 UTC m=+0.057576083 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 16:55:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 16:55:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1146691313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 16:55:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 16:55:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1146691313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 16:55:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1146691313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 16:55:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1146691313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:55:44.437587) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337744437676, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1544, "num_deletes": 251, "total_data_size": 2454147, "memory_usage": 2498648, "flush_reason": "Manual Compaction"}
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337744686611, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2398591, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14805, "largest_seqno": 16348, "table_properties": {"data_size": 2391461, "index_size": 4137, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14644, "raw_average_key_size": 19, "raw_value_size": 2377167, "raw_average_value_size": 3199, "num_data_blocks": 189, "num_entries": 743, "num_filter_entries": 743, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759337586, "oldest_key_time": 1759337586, "file_creation_time": 1759337744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 249068 microseconds, and 9987 cpu microseconds.
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:55:44.686667) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2398591 bytes OK
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:55:44.686692) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:55:44.717662) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:55:44.717703) EVENT_LOG_v1 {"time_micros": 1759337744717692, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:55:44.717730) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2447408, prev total WAL file size 2447408, number of live WAL files 2.
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:55:44.719021) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2342KB)], [35(6924KB)]
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337744719066, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9488918, "oldest_snapshot_seqno": -1}
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4007 keys, 7705755 bytes, temperature: kUnknown
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337744951146, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7705755, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7676738, "index_size": 17899, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97850, "raw_average_key_size": 24, "raw_value_size": 7602017, "raw_average_value_size": 1897, "num_data_blocks": 757, "num_entries": 4007, "num_filter_entries": 4007, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336399, "oldest_key_time": 0, "file_creation_time": 1759337744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:55:44.951449) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7705755 bytes
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:55:44.955816) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 40.9 rd, 33.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 6.8 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(7.2) write-amplify(3.2) OK, records in: 4521, records dropped: 514 output_compression: NoCompression
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:55:44.955870) EVENT_LOG_v1 {"time_micros": 1759337744955846, "job": 16, "event": "compaction_finished", "compaction_time_micros": 232172, "compaction_time_cpu_micros": 20871, "output_level": 6, "num_output_files": 1, "total_output_size": 7705755, "num_input_records": 4521, "num_output_records": 4007, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337744957425, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337744960781, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:55:44.718883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:55:44.960957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:55:44.960967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:55:44.960971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:55:44.960976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:55:44 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:55:44.960980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:55:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:55:45 compute-0 ceph-mon[74273]: pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:46 compute-0 ceph-mon[74273]: pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:49 compute-0 ceph-mon[74273]: pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:55:51 compute-0 ceph-mon[74273]: pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:53 compute-0 ceph-mon[74273]: pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:54 compute-0 podman[262082]: 2025-10-01 16:55:54.804735726 +0000 UTC m=+0.114208597 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 01 16:55:55 compute-0 ceph-mon[74273]: pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:55:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct 01 16:55:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 01 16:55:55 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 01 16:55:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 01 16:55:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 01 16:55:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:56 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 01 16:55:57 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 01 16:55:57 compute-0 ceph-mon[74273]: pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:57 compute-0 podman[262110]: 2025-10-01 16:55:57.778863606 +0000 UTC m=+0.085317096 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 01 16:55:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:58 compute-0 ceph-mon[74273]: pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:55:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:56:00 compute-0 ceph-mon[74273]: pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:02 compute-0 ceph-mon[74273]: pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:56:05 compute-0 ceph-mon[74273]: pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:06 compute-0 ceph-mon[74273]: pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:09 compute-0 ceph-mon[74273]: pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 01 16:56:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:56:11 compute-0 ceph-mon[74273]: pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:56:11
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'backups', 'images', 'vms', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr']
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:56:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 01 16:56:12 compute-0 ceph-mon[74273]: pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 01 16:56:12 compute-0 podman[262129]: 2025-10-01 16:56:12.770309209 +0000 UTC m=+0.070493913 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.149 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.149 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.170 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.171 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.171 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.186 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.186 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.186 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.187 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.187 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.187 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.216 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.216 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.216 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.216 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.217 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 16:56:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 16:56:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/595797681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.676 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 16:56:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.827 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.828 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5183MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.829 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.829 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:56:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/595797681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.899 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.899 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 16:56:13 compute-0 nova_compute[259504]: 2025-10-01 16:56:13.916 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 16:56:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 16:56:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3883714056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:56:14 compute-0 nova_compute[259504]: 2025-10-01 16:56:14.349 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 16:56:14 compute-0 nova_compute[259504]: 2025-10-01 16:56:14.357 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 16:56:14 compute-0 nova_compute[259504]: 2025-10-01 16:56:14.378 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 16:56:14 compute-0 nova_compute[259504]: 2025-10-01 16:56:14.379 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 16:56:14 compute-0 nova_compute[259504]: 2025-10-01 16:56:14.379 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:56:14 compute-0 podman[262193]: 2025-10-01 16:56:14.783507516 +0000 UTC m=+0.093387438 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct 01 16:56:14 compute-0 ceph-mon[74273]: pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Oct 01 16:56:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3883714056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:56:14 compute-0 nova_compute[259504]: 2025-10-01 16:56:14.942 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:56:14 compute-0 nova_compute[259504]: 2025-10-01 16:56:14.943 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:56:14 compute-0 nova_compute[259504]: 2025-10-01 16:56:14.943 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 16:56:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:56:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 16:56:16 compute-0 ceph-mon[74273]: pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 16:56:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 16:56:18 compute-0 ceph-mon[74273]: pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 16:56:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 16:56:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:56:19.961 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:56:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:56:19.962 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:56:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:56:19.962 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:56:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:56:21 compute-0 ceph-mon[74273]: pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 16:56:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 16:56:23 compute-0 ceph-mon[74273]: pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 16:56:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 16:56:25 compute-0 ceph-mon[74273]: pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 16:56:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:56:25 compute-0 podman[262213]: 2025-10-01 16:56:25.786116784 +0000 UTC m=+0.103186827 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 01 16:56:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Oct 01 16:56:26 compute-0 ceph-mon[74273]: pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Oct 01 16:56:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:28 compute-0 podman[262239]: 2025-10-01 16:56:28.756595555 +0000 UTC m=+0.080067129 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 01 16:56:28 compute-0 ceph-mon[74273]: pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:56:30 compute-0 ceph-mon[74273]: pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:32 compute-0 ceph-mon[74273]: pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:34 compute-0 ceph-mon[74273]: pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:56:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:37 compute-0 ceph-mon[74273]: pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:37 compute-0 sudo[262258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:56:37 compute-0 sudo[262258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:37 compute-0 sudo[262258]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:37 compute-0 sudo[262283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:56:37 compute-0 sudo[262283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:37 compute-0 sudo[262283]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:37 compute-0 sudo[262308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:56:37 compute-0 sudo[262308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:37 compute-0 sudo[262308]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:37 compute-0 sudo[262333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 16:56:37 compute-0 sudo[262333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:38 compute-0 podman[262433]: 2025-10-01 16:56:38.529372189 +0000 UTC m=+0.171759372 container exec bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 16:56:38 compute-0 podman[262433]: 2025-10-01 16:56:38.652543807 +0000 UTC m=+0.294930920 container exec_died bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:56:39 compute-0 ceph-mon[74273]: pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:39 compute-0 sudo[262333]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:56:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:56:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:56:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:56:39 compute-0 sudo[262594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:56:39 compute-0 sudo[262594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:39 compute-0 sudo[262594]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:39 compute-0 sudo[262619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:56:39 compute-0 sudo[262619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:39 compute-0 sudo[262619]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:39 compute-0 sudo[262644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:56:39 compute-0 sudo[262644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:39 compute-0 sudo[262644]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:39 compute-0 sudo[262669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:56:39 compute-0 sudo[262669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:56:40 compute-0 sudo[262669]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 01 16:56:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 16:56:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:56:40 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:56:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:56:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:56:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:56:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:56:40 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 82b84ac8-f99b-4328-ae50-47dcd21d9f18 does not exist
Oct 01 16:56:40 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev d97c244b-1f43-425b-9f17-b5e7f5b4f992 does not exist
Oct 01 16:56:40 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev f703aba2-3238-412c-9316-0497c1ad3a16 does not exist
Oct 01 16:56:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:56:40 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:56:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:56:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:56:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:56:40 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:56:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:56:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:56:40 compute-0 ceph-mon[74273]: pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 16:56:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:56:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:56:40 compute-0 sudo[262725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:56:40 compute-0 sudo[262725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:40 compute-0 sudo[262725]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:40 compute-0 sudo[262750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:56:40 compute-0 sudo[262750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:40 compute-0 sudo[262750]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:40 compute-0 sudo[262775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:56:40 compute-0 sudo[262775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:40 compute-0 sudo[262775]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:40 compute-0 sudo[262800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:56:40 compute-0 sudo[262800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:56:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:56:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:56:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:56:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:56:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:56:41 compute-0 podman[262866]: 2025-10-01 16:56:41.439532103 +0000 UTC m=+0.074835819 container create 607de5ffacbdff78fd57025815882dd5258ecde6642ac5cd091ef34d66432ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banzai, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:56:41 compute-0 systemd[1]: Started libpod-conmon-607de5ffacbdff78fd57025815882dd5258ecde6642ac5cd091ef34d66432ea3.scope.
Oct 01 16:56:41 compute-0 podman[262866]: 2025-10-01 16:56:41.404930589 +0000 UTC m=+0.040234375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:56:41 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:56:41 compute-0 podman[262866]: 2025-10-01 16:56:41.557699392 +0000 UTC m=+0.193003138 container init 607de5ffacbdff78fd57025815882dd5258ecde6642ac5cd091ef34d66432ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 01 16:56:41 compute-0 podman[262866]: 2025-10-01 16:56:41.574971321 +0000 UTC m=+0.210275027 container start 607de5ffacbdff78fd57025815882dd5258ecde6642ac5cd091ef34d66432ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 16:56:41 compute-0 condescending_banzai[262882]: 167 167
Oct 01 16:56:41 compute-0 systemd[1]: libpod-607de5ffacbdff78fd57025815882dd5258ecde6642ac5cd091ef34d66432ea3.scope: Deactivated successfully.
Oct 01 16:56:41 compute-0 podman[262866]: 2025-10-01 16:56:41.590385311 +0000 UTC m=+0.225689057 container attach 607de5ffacbdff78fd57025815882dd5258ecde6642ac5cd091ef34d66432ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banzai, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:56:41 compute-0 podman[262866]: 2025-10-01 16:56:41.59100271 +0000 UTC m=+0.226306446 container died 607de5ffacbdff78fd57025815882dd5258ecde6642ac5cd091ef34d66432ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banzai, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:56:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a250ac1def7f24ad2aa4d59cdd71af98a9d66ed70f9871154c1e1d7c5ee45ef1-merged.mount: Deactivated successfully.
Oct 01 16:56:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:56:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:56:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:56:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:56:41 compute-0 podman[262866]: 2025-10-01 16:56:41.68164536 +0000 UTC m=+0.316949086 container remove 607de5ffacbdff78fd57025815882dd5258ecde6642ac5cd091ef34d66432ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:56:41 compute-0 systemd[1]: libpod-conmon-607de5ffacbdff78fd57025815882dd5258ecde6642ac5cd091ef34d66432ea3.scope: Deactivated successfully.
Oct 01 16:56:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:41 compute-0 podman[262906]: 2025-10-01 16:56:41.896237148 +0000 UTC m=+0.058073576 container create bfc54b77770f4fa26b489acad30acab9c1638e122ec1587b88b32cc2b33157c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_benz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 16:56:41 compute-0 systemd[1]: Started libpod-conmon-bfc54b77770f4fa26b489acad30acab9c1638e122ec1587b88b32cc2b33157c1.scope.
Oct 01 16:56:41 compute-0 podman[262906]: 2025-10-01 16:56:41.875259226 +0000 UTC m=+0.037095704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:56:41 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c772fcb6be2e90d4d262ca7e39f25b354da921f98a730b6006034a5f7d9e413/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c772fcb6be2e90d4d262ca7e39f25b354da921f98a730b6006034a5f7d9e413/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c772fcb6be2e90d4d262ca7e39f25b354da921f98a730b6006034a5f7d9e413/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c772fcb6be2e90d4d262ca7e39f25b354da921f98a730b6006034a5f7d9e413/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c772fcb6be2e90d4d262ca7e39f25b354da921f98a730b6006034a5f7d9e413/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:56:42 compute-0 podman[262906]: 2025-10-01 16:56:41.999981903 +0000 UTC m=+0.161818421 container init bfc54b77770f4fa26b489acad30acab9c1638e122ec1587b88b32cc2b33157c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_benz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 16:56:42 compute-0 podman[262906]: 2025-10-01 16:56:42.007744225 +0000 UTC m=+0.169580643 container start bfc54b77770f4fa26b489acad30acab9c1638e122ec1587b88b32cc2b33157c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_benz, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 16:56:42 compute-0 podman[262906]: 2025-10-01 16:56:42.01122092 +0000 UTC m=+0.173057358 container attach bfc54b77770f4fa26b489acad30acab9c1638e122ec1587b88b32cc2b33157c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_benz, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 16:56:42 compute-0 ceph-mon[74273]: pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:43 compute-0 zealous_benz[262923]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:56:43 compute-0 zealous_benz[262923]: --> relative data size: 1.0
Oct 01 16:56:43 compute-0 zealous_benz[262923]: --> All data devices are unavailable
Oct 01 16:56:43 compute-0 systemd[1]: libpod-bfc54b77770f4fa26b489acad30acab9c1638e122ec1587b88b32cc2b33157c1.scope: Deactivated successfully.
Oct 01 16:56:43 compute-0 systemd[1]: libpod-bfc54b77770f4fa26b489acad30acab9c1638e122ec1587b88b32cc2b33157c1.scope: Consumed 1.158s CPU time.
Oct 01 16:56:43 compute-0 podman[262906]: 2025-10-01 16:56:43.205629637 +0000 UTC m=+1.367466095 container died bfc54b77770f4fa26b489acad30acab9c1638e122ec1587b88b32cc2b33157c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_benz, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:56:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c772fcb6be2e90d4d262ca7e39f25b354da921f98a730b6006034a5f7d9e413-merged.mount: Deactivated successfully.
Oct 01 16:56:43 compute-0 podman[262906]: 2025-10-01 16:56:43.317749193 +0000 UTC m=+1.479585621 container remove bfc54b77770f4fa26b489acad30acab9c1638e122ec1587b88b32cc2b33157c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 16:56:43 compute-0 systemd[1]: libpod-conmon-bfc54b77770f4fa26b489acad30acab9c1638e122ec1587b88b32cc2b33157c1.scope: Deactivated successfully.
Oct 01 16:56:43 compute-0 podman[262954]: 2025-10-01 16:56:43.34345725 +0000 UTC m=+0.101988482 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 01 16:56:43 compute-0 sudo[262800]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:43 compute-0 sudo[262985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:56:43 compute-0 sudo[262985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:43 compute-0 sudo[262985]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:43 compute-0 sudo[263010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:56:43 compute-0 sudo[263010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:43 compute-0 sudo[263010]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:43 compute-0 sudo[263035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:56:43 compute-0 sudo[263035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:43 compute-0 sudo[263035]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:43 compute-0 sudo[263060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:56:43 compute-0 sudo[263060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 16:56:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/732685493' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 16:56:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 16:56:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/732685493' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 16:56:44 compute-0 podman[263126]: 2025-10-01 16:56:44.125059766 +0000 UTC m=+0.066591765 container create e24a37147191e3e9d45f4dd439bb70bfb292ce19aa85c4e993fce0d5f35d9264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_liskov, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 16:56:44 compute-0 systemd[1]: Started libpod-conmon-e24a37147191e3e9d45f4dd439bb70bfb292ce19aa85c4e993fce0d5f35d9264.scope.
Oct 01 16:56:44 compute-0 podman[263126]: 2025-10-01 16:56:44.096455099 +0000 UTC m=+0.037987138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:56:44 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:56:44 compute-0 podman[263126]: 2025-10-01 16:56:44.232181703 +0000 UTC m=+0.173713742 container init e24a37147191e3e9d45f4dd439bb70bfb292ce19aa85c4e993fce0d5f35d9264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 01 16:56:44 compute-0 podman[263126]: 2025-10-01 16:56:44.244026817 +0000 UTC m=+0.185558816 container start e24a37147191e3e9d45f4dd439bb70bfb292ce19aa85c4e993fce0d5f35d9264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:56:44 compute-0 podman[263126]: 2025-10-01 16:56:44.249022586 +0000 UTC m=+0.190554655 container attach e24a37147191e3e9d45f4dd439bb70bfb292ce19aa85c4e993fce0d5f35d9264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:56:44 compute-0 lucid_liskov[263142]: 167 167
Oct 01 16:56:44 compute-0 systemd[1]: libpod-e24a37147191e3e9d45f4dd439bb70bfb292ce19aa85c4e993fce0d5f35d9264.scope: Deactivated successfully.
Oct 01 16:56:44 compute-0 podman[263126]: 2025-10-01 16:56:44.253694083 +0000 UTC m=+0.195226082 container died e24a37147191e3e9d45f4dd439bb70bfb292ce19aa85c4e993fce0d5f35d9264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_liskov, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:56:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-07b46e5d4763526148e5a862e4db1c26d5772f589a7721986e96a9f356375e2a-merged.mount: Deactivated successfully.
Oct 01 16:56:44 compute-0 podman[263126]: 2025-10-01 16:56:44.30207701 +0000 UTC m=+0.243609009 container remove e24a37147191e3e9d45f4dd439bb70bfb292ce19aa85c4e993fce0d5f35d9264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:56:44 compute-0 systemd[1]: libpod-conmon-e24a37147191e3e9d45f4dd439bb70bfb292ce19aa85c4e993fce0d5f35d9264.scope: Deactivated successfully.
Oct 01 16:56:44 compute-0 podman[263166]: 2025-10-01 16:56:44.53733618 +0000 UTC m=+0.058624607 container create bb9cdb7ec90ed3ff78dba870b64e7cc9f3e6102e41cb7865f0e7c2d66b5757fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_allen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 01 16:56:44 compute-0 systemd[1]: Started libpod-conmon-bb9cdb7ec90ed3ff78dba870b64e7cc9f3e6102e41cb7865f0e7c2d66b5757fe.scope.
Oct 01 16:56:44 compute-0 podman[263166]: 2025-10-01 16:56:44.505949687 +0000 UTC m=+0.027238124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:56:44 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:56:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9edbccb168cc502e222df52084c542e2e6686b6601dc99704d6f90371159d3c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:56:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9edbccb168cc502e222df52084c542e2e6686b6601dc99704d6f90371159d3c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:56:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9edbccb168cc502e222df52084c542e2e6686b6601dc99704d6f90371159d3c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:56:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9edbccb168cc502e222df52084c542e2e6686b6601dc99704d6f90371159d3c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:56:44 compute-0 podman[263166]: 2025-10-01 16:56:44.642615851 +0000 UTC m=+0.163904288 container init bb9cdb7ec90ed3ff78dba870b64e7cc9f3e6102e41cb7865f0e7c2d66b5757fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_allen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 16:56:44 compute-0 podman[263166]: 2025-10-01 16:56:44.655150984 +0000 UTC m=+0.176439421 container start bb9cdb7ec90ed3ff78dba870b64e7cc9f3e6102e41cb7865f0e7c2d66b5757fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_allen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 16:56:44 compute-0 podman[263166]: 2025-10-01 16:56:44.664987932 +0000 UTC m=+0.186276369 container attach bb9cdb7ec90ed3ff78dba870b64e7cc9f3e6102e41cb7865f0e7c2d66b5757fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_allen, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 16:56:44 compute-0 ceph-mon[74273]: pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/732685493' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 16:56:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/732685493' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 16:56:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:56:45 compute-0 magical_allen[263182]: {
Oct 01 16:56:45 compute-0 magical_allen[263182]:     "0": [
Oct 01 16:56:45 compute-0 magical_allen[263182]:         {
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "devices": [
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "/dev/loop3"
Oct 01 16:56:45 compute-0 magical_allen[263182]:             ],
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "lv_name": "ceph_lv0",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "lv_size": "21470642176",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "name": "ceph_lv0",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "tags": {
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.cluster_name": "ceph",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.crush_device_class": "",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.encrypted": "0",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.osd_id": "0",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.type": "block",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.vdo": "0"
Oct 01 16:56:45 compute-0 magical_allen[263182]:             },
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "type": "block",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "vg_name": "ceph_vg0"
Oct 01 16:56:45 compute-0 magical_allen[263182]:         }
Oct 01 16:56:45 compute-0 magical_allen[263182]:     ],
Oct 01 16:56:45 compute-0 magical_allen[263182]:     "1": [
Oct 01 16:56:45 compute-0 magical_allen[263182]:         {
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "devices": [
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "/dev/loop4"
Oct 01 16:56:45 compute-0 magical_allen[263182]:             ],
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "lv_name": "ceph_lv1",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "lv_size": "21470642176",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "name": "ceph_lv1",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "tags": {
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.cluster_name": "ceph",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.crush_device_class": "",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.encrypted": "0",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.osd_id": "1",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.type": "block",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.vdo": "0"
Oct 01 16:56:45 compute-0 magical_allen[263182]:             },
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "type": "block",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "vg_name": "ceph_vg1"
Oct 01 16:56:45 compute-0 magical_allen[263182]:         }
Oct 01 16:56:45 compute-0 magical_allen[263182]:     ],
Oct 01 16:56:45 compute-0 magical_allen[263182]:     "2": [
Oct 01 16:56:45 compute-0 magical_allen[263182]:         {
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "devices": [
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "/dev/loop5"
Oct 01 16:56:45 compute-0 magical_allen[263182]:             ],
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "lv_name": "ceph_lv2",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "lv_size": "21470642176",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "name": "ceph_lv2",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "tags": {
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.cluster_name": "ceph",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.crush_device_class": "",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.encrypted": "0",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.osd_id": "2",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.type": "block",
Oct 01 16:56:45 compute-0 magical_allen[263182]:                 "ceph.vdo": "0"
Oct 01 16:56:45 compute-0 magical_allen[263182]:             },
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "type": "block",
Oct 01 16:56:45 compute-0 magical_allen[263182]:             "vg_name": "ceph_vg2"
Oct 01 16:56:45 compute-0 magical_allen[263182]:         }
Oct 01 16:56:45 compute-0 magical_allen[263182]:     ]
Oct 01 16:56:45 compute-0 magical_allen[263182]: }
Oct 01 16:56:45 compute-0 systemd[1]: libpod-bb9cdb7ec90ed3ff78dba870b64e7cc9f3e6102e41cb7865f0e7c2d66b5757fe.scope: Deactivated successfully.
Oct 01 16:56:45 compute-0 podman[263166]: 2025-10-01 16:56:45.440281232 +0000 UTC m=+0.961569629 container died bb9cdb7ec90ed3ff78dba870b64e7cc9f3e6102e41cb7865f0e7c2d66b5757fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 16:56:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-9edbccb168cc502e222df52084c542e2e6686b6601dc99704d6f90371159d3c8-merged.mount: Deactivated successfully.
Oct 01 16:56:45 compute-0 podman[263166]: 2025-10-01 16:56:45.495624048 +0000 UTC m=+1.016912445 container remove bb9cdb7ec90ed3ff78dba870b64e7cc9f3e6102e41cb7865f0e7c2d66b5757fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_allen, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 16:56:45 compute-0 systemd[1]: libpod-conmon-bb9cdb7ec90ed3ff78dba870b64e7cc9f3e6102e41cb7865f0e7c2d66b5757fe.scope: Deactivated successfully.
Oct 01 16:56:45 compute-0 sudo[263060]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:45 compute-0 podman[263191]: 2025-10-01 16:56:45.548656409 +0000 UTC m=+0.078161425 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 01 16:56:45 compute-0 sudo[263224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:56:45 compute-0 sudo[263224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:45 compute-0 sudo[263224]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:45 compute-0 sudo[263249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:56:45 compute-0 sudo[263249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:45 compute-0 sudo[263249]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:45 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:56:45.656 162304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:71:db', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:60:3f:78:bd:29'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 16:56:45 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:56:45.659 162304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 16:56:45 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:56:45.661 162304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2971fc2-5b75-459a-98a0-6e626d0d4d99, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 16:56:45 compute-0 sudo[263274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:56:45 compute-0 sudo[263274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:45 compute-0 sudo[263274]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:45 compute-0 sudo[263299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:56:45 compute-0 sudo[263299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:46 compute-0 podman[263365]: 2025-10-01 16:56:46.218373828 +0000 UTC m=+0.068745581 container create ca5c46bbdf555addc8a545592069737bf9944848910c10c1255462686c8e2e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:56:46 compute-0 systemd[1]: Started libpod-conmon-ca5c46bbdf555addc8a545592069737bf9944848910c10c1255462686c8e2e73.scope.
Oct 01 16:56:46 compute-0 podman[263365]: 2025-10-01 16:56:46.189559824 +0000 UTC m=+0.039931627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:56:46 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:56:46 compute-0 podman[263365]: 2025-10-01 16:56:46.316357457 +0000 UTC m=+0.166729260 container init ca5c46bbdf555addc8a545592069737bf9944848910c10c1255462686c8e2e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:56:46 compute-0 podman[263365]: 2025-10-01 16:56:46.329308302 +0000 UTC m=+0.179680055 container start ca5c46bbdf555addc8a545592069737bf9944848910c10c1255462686c8e2e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 01 16:56:46 compute-0 podman[263365]: 2025-10-01 16:56:46.333165136 +0000 UTC m=+0.183536929 container attach ca5c46bbdf555addc8a545592069737bf9944848910c10c1255462686c8e2e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:56:46 compute-0 strange_swartz[263381]: 167 167
Oct 01 16:56:46 compute-0 systemd[1]: libpod-ca5c46bbdf555addc8a545592069737bf9944848910c10c1255462686c8e2e73.scope: Deactivated successfully.
Oct 01 16:56:46 compute-0 podman[263365]: 2025-10-01 16:56:46.338304083 +0000 UTC m=+0.188675866 container died ca5c46bbdf555addc8a545592069737bf9944848910c10c1255462686c8e2e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 16:56:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-08b70b8e91bb60e0bac73bd2bd9ef9af37a3033fefe32392b1d775b9caae08e8-merged.mount: Deactivated successfully.
Oct 01 16:56:46 compute-0 podman[263365]: 2025-10-01 16:56:46.39741006 +0000 UTC m=+0.247781793 container remove ca5c46bbdf555addc8a545592069737bf9944848910c10c1255462686c8e2e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:56:46 compute-0 systemd[1]: libpod-conmon-ca5c46bbdf555addc8a545592069737bf9944848910c10c1255462686c8e2e73.scope: Deactivated successfully.
Oct 01 16:56:46 compute-0 podman[263404]: 2025-10-01 16:56:46.623508509 +0000 UTC m=+0.071713000 container create dfa4c92a62cf733d2340f19afc7fad49530d2dad6afe5e3f7852751310baa438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kowalevski, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:56:46 compute-0 systemd[1]: Started libpod-conmon-dfa4c92a62cf733d2340f19afc7fad49530d2dad6afe5e3f7852751310baa438.scope.
Oct 01 16:56:46 compute-0 podman[263404]: 2025-10-01 16:56:46.595635015 +0000 UTC m=+0.043839546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:56:46 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80b3bda22232132748f5a4c6a8c6a62ee436d2ef2dfa13abe0782860abc9cc98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80b3bda22232132748f5a4c6a8c6a62ee436d2ef2dfa13abe0782860abc9cc98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80b3bda22232132748f5a4c6a8c6a62ee436d2ef2dfa13abe0782860abc9cc98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80b3bda22232132748f5a4c6a8c6a62ee436d2ef2dfa13abe0782860abc9cc98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:56:46 compute-0 podman[263404]: 2025-10-01 16:56:46.741007433 +0000 UTC m=+0.189211904 container init dfa4c92a62cf733d2340f19afc7fad49530d2dad6afe5e3f7852751310baa438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kowalevski, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 16:56:46 compute-0 podman[263404]: 2025-10-01 16:56:46.75576798 +0000 UTC m=+0.203972461 container start dfa4c92a62cf733d2340f19afc7fad49530d2dad6afe5e3f7852751310baa438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 16:56:46 compute-0 podman[263404]: 2025-10-01 16:56:46.76014618 +0000 UTC m=+0.208350631 container attach dfa4c92a62cf733d2340f19afc7fad49530d2dad6afe5e3f7852751310baa438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kowalevski, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:56:46 compute-0 ceph-mon[74273]: pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]: {
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:         "osd_id": 2,
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:         "type": "bluestore"
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:     },
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:         "osd_id": 0,
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:         "type": "bluestore"
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:     },
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:         "osd_id": 1,
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:         "type": "bluestore"
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]:     }
Oct 01 16:56:47 compute-0 upbeat_kowalevski[263421]: }
Oct 01 16:56:47 compute-0 systemd[1]: libpod-dfa4c92a62cf733d2340f19afc7fad49530d2dad6afe5e3f7852751310baa438.scope: Deactivated successfully.
Oct 01 16:56:47 compute-0 systemd[1]: libpod-dfa4c92a62cf733d2340f19afc7fad49530d2dad6afe5e3f7852751310baa438.scope: Consumed 1.001s CPU time.
Oct 01 16:56:47 compute-0 podman[263454]: 2025-10-01 16:56:47.7932252 +0000 UTC m=+0.028066069 container died dfa4c92a62cf733d2340f19afc7fad49530d2dad6afe5e3f7852751310baa438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 16:56:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-80b3bda22232132748f5a4c6a8c6a62ee436d2ef2dfa13abe0782860abc9cc98-merged.mount: Deactivated successfully.
Oct 01 16:56:47 compute-0 podman[263454]: 2025-10-01 16:56:47.860931017 +0000 UTC m=+0.095771896 container remove dfa4c92a62cf733d2340f19afc7fad49530d2dad6afe5e3f7852751310baa438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kowalevski, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 16:56:47 compute-0 systemd[1]: libpod-conmon-dfa4c92a62cf733d2340f19afc7fad49530d2dad6afe5e3f7852751310baa438.scope: Deactivated successfully.
Oct 01 16:56:47 compute-0 sudo[263299]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:56:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:56:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:56:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:56:47 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev d2200a63-64dc-49b7-9235-8f9e79018659 does not exist
Oct 01 16:56:47 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev f6ddd8f1-53d2-46c3-b56e-a3ca0ad30239 does not exist
Oct 01 16:56:48 compute-0 sudo[263469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:56:48 compute-0 sudo[263469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:48 compute-0 sudo[263469]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:48 compute-0 sudo[263494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:56:48 compute-0 sudo[263494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:56:48 compute-0 sudo[263494]: pam_unix(sudo:session): session closed for user root
Oct 01 16:56:48 compute-0 ceph-mon[74273]: pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:48 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:56:48 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:56:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:56:50 compute-0 ceph-mon[74273]: pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:53 compute-0 ceph-mon[74273]: pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:55 compute-0 ceph-mon[74273]: pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:56:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:56 compute-0 podman[263519]: 2025-10-01 16:56:56.847267832 +0000 UTC m=+0.153050253 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 16:56:57 compute-0 ceph-mon[74273]: pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:59 compute-0 ceph-mon[74273]: pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:56:59 compute-0 podman[263547]: 2025-10-01 16:56:59.768571181 +0000 UTC m=+0.073018682 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 01 16:56:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:57:01 compute-0 ceph-mon[74273]: pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:03 compute-0 ceph-mon[74273]: pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:57:05 compute-0 ceph-mon[74273]: pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:06 compute-0 ceph-mon[74273]: pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:08 compute-0 ceph-mon[74273]: pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:57:10 compute-0 ceph-mon[74273]: pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:57:11
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'volumes', '.rgw.root', 'default.rgw.control', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta']
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:57:11 compute-0 nova_compute[259504]: 2025-10-01 16:57:11.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:57:11 compute-0 nova_compute[259504]: 2025-10-01 16:57:11.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:57:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:12 compute-0 nova_compute[259504]: 2025-10-01 16:57:12.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:57:12 compute-0 nova_compute[259504]: 2025-10-01 16:57:12.810 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:57:12 compute-0 nova_compute[259504]: 2025-10-01 16:57:12.810 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:57:12 compute-0 nova_compute[259504]: 2025-10-01 16:57:12.810 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:57:12 compute-0 nova_compute[259504]: 2025-10-01 16:57:12.810 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 16:57:12 compute-0 nova_compute[259504]: 2025-10-01 16:57:12.811 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 16:57:13 compute-0 ceph-mon[74273]: pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 16:57:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2921904495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:57:13 compute-0 nova_compute[259504]: 2025-10-01 16:57:13.257 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 16:57:13 compute-0 nova_compute[259504]: 2025-10-01 16:57:13.474 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 16:57:13 compute-0 nova_compute[259504]: 2025-10-01 16:57:13.475 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5163MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 16:57:13 compute-0 nova_compute[259504]: 2025-10-01 16:57:13.475 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:57:13 compute-0 nova_compute[259504]: 2025-10-01 16:57:13.476 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:57:13 compute-0 nova_compute[259504]: 2025-10-01 16:57:13.594 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 16:57:13 compute-0 nova_compute[259504]: 2025-10-01 16:57:13.595 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 16:57:13 compute-0 nova_compute[259504]: 2025-10-01 16:57:13.616 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 16:57:13 compute-0 podman[263589]: 2025-10-01 16:57:13.76463595 +0000 UTC m=+0.078075127 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Oct 01 16:57:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 16:57:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2237401585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:57:14 compute-0 nova_compute[259504]: 2025-10-01 16:57:14.095 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 16:57:14 compute-0 nova_compute[259504]: 2025-10-01 16:57:14.104 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 16:57:14 compute-0 nova_compute[259504]: 2025-10-01 16:57:14.130 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 16:57:14 compute-0 nova_compute[259504]: 2025-10-01 16:57:14.133 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 16:57:14 compute-0 nova_compute[259504]: 2025-10-01 16:57:14.133 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:57:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2921904495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:57:15 compute-0 nova_compute[259504]: 2025-10-01 16:57:15.128 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:57:15 compute-0 nova_compute[259504]: 2025-10-01 16:57:15.129 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:57:15 compute-0 nova_compute[259504]: 2025-10-01 16:57:15.130 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 16:57:15 compute-0 nova_compute[259504]: 2025-10-01 16:57:15.130 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 16:57:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:57:15 compute-0 ceph-mon[74273]: pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2237401585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:57:15 compute-0 nova_compute[259504]: 2025-10-01 16:57:15.239 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 16:57:15 compute-0 nova_compute[259504]: 2025-10-01 16:57:15.240 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:57:15 compute-0 nova_compute[259504]: 2025-10-01 16:57:15.240 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:57:15 compute-0 nova_compute[259504]: 2025-10-01 16:57:15.240 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:57:15 compute-0 nova_compute[259504]: 2025-10-01 16:57:15.241 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 16:57:15 compute-0 nova_compute[259504]: 2025-10-01 16:57:15.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:57:15 compute-0 podman[263631]: 2025-10-01 16:57:15.770383688 +0000 UTC m=+0.084837016 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 01 16:57:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:16 compute-0 ceph-mon[74273]: pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:18 compute-0 ceph-mon[74273]: pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:57:19.962 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:57:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:57:19.962 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:57:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:57:19.963 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:57:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:57:20 compute-0 ceph-mon[74273]: pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:57:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:22 compute-0 ceph-mon[74273]: pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:24 compute-0 ceph-mon[74273]: pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:57:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:27 compute-0 ceph-mon[74273]: pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:27 compute-0 podman[263652]: 2025-10-01 16:57:27.792929109 +0000 UTC m=+0.102215287 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 01 16:57:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:29 compute-0 ceph-mon[74273]: pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:29 compute-0 PackageKit[191756]: daemon quit
Oct 01 16:57:29 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 01 16:57:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:57:30 compute-0 podman[263678]: 2025-10-01 16:57:30.766832287 +0000 UTC m=+0.080794454 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct 01 16:57:31 compute-0 ceph-mon[74273]: pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:33 compute-0 ceph-mon[74273]: pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:35 compute-0 ceph-mon[74273]: pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:57:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:37 compute-0 ceph-mon[74273]: pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:38 compute-0 ceph-mon[74273]: pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:57:40 compute-0 ceph-mon[74273]: pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:57:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:57:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:57:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:57:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:57:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:57:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:42 compute-0 ceph-mon[74273]: pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 16:57:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4138846590' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 16:57:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 16:57:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4138846590' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 16:57:43 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4138846590' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 16:57:43 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/4138846590' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 16:57:44 compute-0 podman[263698]: 2025-10-01 16:57:44.806263442 +0000 UTC m=+0.113861616 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 01 16:57:44 compute-0 ceph-mon[74273]: pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:57:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:46 compute-0 ceph-mon[74273]: pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:46 compute-0 podman[263719]: 2025-10-01 16:57:46.779164676 +0000 UTC m=+0.091015018 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 01 16:57:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:48 compute-0 sudo[263739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:57:48 compute-0 sudo[263739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:57:48 compute-0 sudo[263739]: pam_unix(sudo:session): session closed for user root
Oct 01 16:57:48 compute-0 sudo[263764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:57:48 compute-0 sudo[263764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:57:48 compute-0 sudo[263764]: pam_unix(sudo:session): session closed for user root
Oct 01 16:57:48 compute-0 sudo[263789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:57:48 compute-0 sudo[263789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:57:48 compute-0 sudo[263789]: pam_unix(sudo:session): session closed for user root
Oct 01 16:57:48 compute-0 sudo[263814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:57:48 compute-0 sudo[263814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:57:48 compute-0 sudo[263814]: pam_unix(sudo:session): session closed for user root
Oct 01 16:57:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:57:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:57:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:57:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:57:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:57:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:57:48 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev ab64dfa9-f4b4-4c2c-9f0f-4f2e676c93bf does not exist
Oct 01 16:57:48 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev c28859a7-d2c0-4c5a-a56c-2aea5fbcad1c does not exist
Oct 01 16:57:48 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev e587c3f8-28c4-429b-b27e-c46d398082c2 does not exist
Oct 01 16:57:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:57:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:57:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:57:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:57:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:57:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:57:49 compute-0 sudo[263870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:57:49 compute-0 sudo[263870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:57:49 compute-0 sudo[263870]: pam_unix(sudo:session): session closed for user root
Oct 01 16:57:49 compute-0 sudo[263895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:57:49 compute-0 sudo[263895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:57:49 compute-0 sudo[263895]: pam_unix(sudo:session): session closed for user root
Oct 01 16:57:49 compute-0 ceph-mon[74273]: pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:57:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:57:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:57:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:57:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:57:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:57:49 compute-0 sudo[263920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:57:49 compute-0 sudo[263920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:57:49 compute-0 sudo[263920]: pam_unix(sudo:session): session closed for user root
Oct 01 16:57:49 compute-0 sudo[263945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:57:49 compute-0 sudo[263945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:57:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:49 compute-0 podman[264010]: 2025-10-01 16:57:49.838566295 +0000 UTC m=+0.115740008 container create 757345cc4ee9ec0badf0fc6e6c69e4bdec043683ebaf0f9456117fadbd0a3c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:57:49 compute-0 podman[264010]: 2025-10-01 16:57:49.752254551 +0000 UTC m=+0.029428254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:57:49 compute-0 systemd[1]: Started libpod-conmon-757345cc4ee9ec0badf0fc6e6c69e4bdec043683ebaf0f9456117fadbd0a3c9d.scope.
Oct 01 16:57:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:57:50 compute-0 podman[264010]: 2025-10-01 16:57:50.029286069 +0000 UTC m=+0.306459772 container init 757345cc4ee9ec0badf0fc6e6c69e4bdec043683ebaf0f9456117fadbd0a3c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_khayyam, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 16:57:50 compute-0 podman[264010]: 2025-10-01 16:57:50.036116726 +0000 UTC m=+0.313290399 container start 757345cc4ee9ec0badf0fc6e6c69e4bdec043683ebaf0f9456117fadbd0a3c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 16:57:50 compute-0 podman[264010]: 2025-10-01 16:57:50.040372273 +0000 UTC m=+0.317546086 container attach 757345cc4ee9ec0badf0fc6e6c69e4bdec043683ebaf0f9456117fadbd0a3c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_khayyam, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:57:50 compute-0 cool_khayyam[264026]: 167 167
Oct 01 16:57:50 compute-0 systemd[1]: libpod-757345cc4ee9ec0badf0fc6e6c69e4bdec043683ebaf0f9456117fadbd0a3c9d.scope: Deactivated successfully.
Oct 01 16:57:50 compute-0 podman[264010]: 2025-10-01 16:57:50.042564211 +0000 UTC m=+0.319737894 container died 757345cc4ee9ec0badf0fc6e6c69e4bdec043683ebaf0f9456117fadbd0a3c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_khayyam, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:57:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9a65b9693275ac814f8c7e62e33212128ead3932e4b3bf519827db0679f6f27-merged.mount: Deactivated successfully.
Oct 01 16:57:50 compute-0 podman[264010]: 2025-10-01 16:57:50.086759196 +0000 UTC m=+0.363932869 container remove 757345cc4ee9ec0badf0fc6e6c69e4bdec043683ebaf0f9456117fadbd0a3c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_khayyam, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 16:57:50 compute-0 systemd[1]: libpod-conmon-757345cc4ee9ec0badf0fc6e6c69e4bdec043683ebaf0f9456117fadbd0a3c9d.scope: Deactivated successfully.
Oct 01 16:57:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:57:50 compute-0 ceph-mon[74273]: pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:50 compute-0 podman[264048]: 2025-10-01 16:57:50.301057473 +0000 UTC m=+0.063085625 container create 2897f3906bbc6e256ed73f2b9953834683eefba92805bcbf5f8bab97a0fcc068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lewin, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 01 16:57:50 compute-0 systemd[1]: Started libpod-conmon-2897f3906bbc6e256ed73f2b9953834683eefba92805bcbf5f8bab97a0fcc068.scope.
Oct 01 16:57:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:57:50 compute-0 podman[264048]: 2025-10-01 16:57:50.282016996 +0000 UTC m=+0.044045148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:57:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f99cad61e73958d5f945713a171a81fbc553f78721af5113117eb0adc201bb94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:57:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f99cad61e73958d5f945713a171a81fbc553f78721af5113117eb0adc201bb94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:57:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f99cad61e73958d5f945713a171a81fbc553f78721af5113117eb0adc201bb94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:57:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f99cad61e73958d5f945713a171a81fbc553f78721af5113117eb0adc201bb94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:57:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f99cad61e73958d5f945713a171a81fbc553f78721af5113117eb0adc201bb94/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:57:50 compute-0 podman[264048]: 2025-10-01 16:57:50.401610438 +0000 UTC m=+0.163638590 container init 2897f3906bbc6e256ed73f2b9953834683eefba92805bcbf5f8bab97a0fcc068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:57:50 compute-0 podman[264048]: 2025-10-01 16:57:50.414281255 +0000 UTC m=+0.176309387 container start 2897f3906bbc6e256ed73f2b9953834683eefba92805bcbf5f8bab97a0fcc068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lewin, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 16:57:50 compute-0 podman[264048]: 2025-10-01 16:57:50.417998741 +0000 UTC m=+0.180026903 container attach 2897f3906bbc6e256ed73f2b9953834683eefba92805bcbf5f8bab97a0fcc068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 01 16:57:51 compute-0 zealous_lewin[264064]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:57:51 compute-0 zealous_lewin[264064]: --> relative data size: 1.0
Oct 01 16:57:51 compute-0 zealous_lewin[264064]: --> All data devices are unavailable
Oct 01 16:57:51 compute-0 systemd[1]: libpod-2897f3906bbc6e256ed73f2b9953834683eefba92805bcbf5f8bab97a0fcc068.scope: Deactivated successfully.
Oct 01 16:57:51 compute-0 podman[264048]: 2025-10-01 16:57:51.596487391 +0000 UTC m=+1.358515603 container died 2897f3906bbc6e256ed73f2b9953834683eefba92805bcbf5f8bab97a0fcc068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:57:51 compute-0 systemd[1]: libpod-2897f3906bbc6e256ed73f2b9953834683eefba92805bcbf5f8bab97a0fcc068.scope: Consumed 1.145s CPU time.
Oct 01 16:57:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f99cad61e73958d5f945713a171a81fbc553f78721af5113117eb0adc201bb94-merged.mount: Deactivated successfully.
Oct 01 16:57:51 compute-0 podman[264048]: 2025-10-01 16:57:51.688388433 +0000 UTC m=+1.450416595 container remove 2897f3906bbc6e256ed73f2b9953834683eefba92805bcbf5f8bab97a0fcc068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:57:51 compute-0 systemd[1]: libpod-conmon-2897f3906bbc6e256ed73f2b9953834683eefba92805bcbf5f8bab97a0fcc068.scope: Deactivated successfully.
Oct 01 16:57:51 compute-0 sudo[263945]: pam_unix(sudo:session): session closed for user root
Oct 01 16:57:51 compute-0 sudo[264106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:57:51 compute-0 sudo[264106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:57:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:51 compute-0 sudo[264106]: pam_unix(sudo:session): session closed for user root
Oct 01 16:57:51 compute-0 sudo[264131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:57:51 compute-0 sudo[264131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:57:51 compute-0 sudo[264131]: pam_unix(sudo:session): session closed for user root
Oct 01 16:57:52 compute-0 sudo[264156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:57:52 compute-0 sudo[264156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:57:52 compute-0 sudo[264156]: pam_unix(sudo:session): session closed for user root
Oct 01 16:57:52 compute-0 sudo[264181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:57:52 compute-0 sudo[264181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:57:52 compute-0 podman[264248]: 2025-10-01 16:57:52.485022754 +0000 UTC m=+0.055803871 container create 63c3af057eec211affe6c989dd2060e62f8e6370a914e5b66efaba6e10f5df76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:57:52 compute-0 systemd[1]: Started libpod-conmon-63c3af057eec211affe6c989dd2060e62f8e6370a914e5b66efaba6e10f5df76.scope.
Oct 01 16:57:52 compute-0 podman[264248]: 2025-10-01 16:57:52.459965749 +0000 UTC m=+0.030746946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:57:52 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:57:52 compute-0 podman[264248]: 2025-10-01 16:57:52.584781737 +0000 UTC m=+0.155562894 container init 63c3af057eec211affe6c989dd2060e62f8e6370a914e5b66efaba6e10f5df76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 01 16:57:52 compute-0 podman[264248]: 2025-10-01 16:57:52.592058401 +0000 UTC m=+0.162839558 container start 63c3af057eec211affe6c989dd2060e62f8e6370a914e5b66efaba6e10f5df76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 16:57:52 compute-0 podman[264248]: 2025-10-01 16:57:52.595830373 +0000 UTC m=+0.166611580 container attach 63c3af057eec211affe6c989dd2060e62f8e6370a914e5b66efaba6e10f5df76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:57:52 compute-0 wonderful_hypatia[264265]: 167 167
Oct 01 16:57:52 compute-0 systemd[1]: libpod-63c3af057eec211affe6c989dd2060e62f8e6370a914e5b66efaba6e10f5df76.scope: Deactivated successfully.
Oct 01 16:57:52 compute-0 podman[264248]: 2025-10-01 16:57:52.600106979 +0000 UTC m=+0.170888136 container died 63c3af057eec211affe6c989dd2060e62f8e6370a914e5b66efaba6e10f5df76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:57:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-63622fd749b3cb3325399808c70f3a1e1d5e659ad2d004d01e2d8adb6634146d-merged.mount: Deactivated successfully.
Oct 01 16:57:52 compute-0 podman[264248]: 2025-10-01 16:57:52.646408109 +0000 UTC m=+0.217189246 container remove 63c3af057eec211affe6c989dd2060e62f8e6370a914e5b66efaba6e10f5df76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 16:57:52 compute-0 systemd[1]: libpod-conmon-63c3af057eec211affe6c989dd2060e62f8e6370a914e5b66efaba6e10f5df76.scope: Deactivated successfully.
Oct 01 16:57:52 compute-0 podman[264291]: 2025-10-01 16:57:52.856224747 +0000 UTC m=+0.049413442 container create d0d02d83d0f716f16fcdcff174c04065a41f9ef642df1bc86d4dd69988bfb3eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:57:52 compute-0 systemd[1]: Started libpod-conmon-d0d02d83d0f716f16fcdcff174c04065a41f9ef642df1bc86d4dd69988bfb3eb.scope.
Oct 01 16:57:52 compute-0 ceph-mon[74273]: pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:52 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:57:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/809b051da4850d072c17bedf7d7f9e466be96ab607e00f5f470e25d223b393a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:57:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/809b051da4850d072c17bedf7d7f9e466be96ab607e00f5f470e25d223b393a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:57:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/809b051da4850d072c17bedf7d7f9e466be96ab607e00f5f470e25d223b393a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:57:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/809b051da4850d072c17bedf7d7f9e466be96ab607e00f5f470e25d223b393a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:57:52 compute-0 podman[264291]: 2025-10-01 16:57:52.835840208 +0000 UTC m=+0.029028923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:57:52 compute-0 podman[264291]: 2025-10-01 16:57:52.934062294 +0000 UTC m=+0.127251059 container init d0d02d83d0f716f16fcdcff174c04065a41f9ef642df1bc86d4dd69988bfb3eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:57:52 compute-0 podman[264291]: 2025-10-01 16:57:52.947606447 +0000 UTC m=+0.140795172 container start d0d02d83d0f716f16fcdcff174c04065a41f9ef642df1bc86d4dd69988bfb3eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goodall, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 16:57:52 compute-0 podman[264291]: 2025-10-01 16:57:52.951267507 +0000 UTC m=+0.144456292 container attach d0d02d83d0f716f16fcdcff174c04065a41f9ef642df1bc86d4dd69988bfb3eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:57:53 compute-0 cranky_goodall[264307]: {
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:     "0": [
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:         {
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "devices": [
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "/dev/loop3"
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             ],
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "lv_name": "ceph_lv0",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "lv_size": "21470642176",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "name": "ceph_lv0",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "tags": {
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.cluster_name": "ceph",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.crush_device_class": "",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.encrypted": "0",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.osd_id": "0",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.type": "block",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.vdo": "0"
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             },
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "type": "block",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "vg_name": "ceph_vg0"
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:         }
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:     ],
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:     "1": [
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:         {
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "devices": [
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "/dev/loop4"
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             ],
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "lv_name": "ceph_lv1",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "lv_size": "21470642176",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "name": "ceph_lv1",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "tags": {
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.cluster_name": "ceph",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.crush_device_class": "",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.encrypted": "0",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.osd_id": "1",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.type": "block",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.vdo": "0"
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             },
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "type": "block",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "vg_name": "ceph_vg1"
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:         }
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:     ],
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:     "2": [
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:         {
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "devices": [
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "/dev/loop5"
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             ],
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "lv_name": "ceph_lv2",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "lv_size": "21470642176",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "name": "ceph_lv2",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "tags": {
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.cluster_name": "ceph",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.crush_device_class": "",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.encrypted": "0",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.osd_id": "2",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.type": "block",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:                 "ceph.vdo": "0"
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             },
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "type": "block",
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:             "vg_name": "ceph_vg2"
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:         }
Oct 01 16:57:53 compute-0 cranky_goodall[264307]:     ]
Oct 01 16:57:53 compute-0 cranky_goodall[264307]: }
Oct 01 16:57:53 compute-0 systemd[1]: libpod-d0d02d83d0f716f16fcdcff174c04065a41f9ef642df1bc86d4dd69988bfb3eb.scope: Deactivated successfully.
Oct 01 16:57:53 compute-0 podman[264291]: 2025-10-01 16:57:53.737070356 +0000 UTC m=+0.930259041 container died d0d02d83d0f716f16fcdcff174c04065a41f9ef642df1bc86d4dd69988bfb3eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goodall, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 01 16:57:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-809b051da4850d072c17bedf7d7f9e466be96ab607e00f5f470e25d223b393a0-merged.mount: Deactivated successfully.
Oct 01 16:57:53 compute-0 podman[264291]: 2025-10-01 16:57:53.809606103 +0000 UTC m=+1.002794818 container remove d0d02d83d0f716f16fcdcff174c04065a41f9ef642df1bc86d4dd69988bfb3eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 16:57:53 compute-0 systemd[1]: libpod-conmon-d0d02d83d0f716f16fcdcff174c04065a41f9ef642df1bc86d4dd69988bfb3eb.scope: Deactivated successfully.
Oct 01 16:57:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:53 compute-0 sudo[264181]: pam_unix(sudo:session): session closed for user root
Oct 01 16:57:53 compute-0 sudo[264328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:57:53 compute-0 sudo[264328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:57:53 compute-0 sudo[264328]: pam_unix(sudo:session): session closed for user root
Oct 01 16:57:54 compute-0 sudo[264353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:57:54 compute-0 sudo[264353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:57:54 compute-0 sudo[264353]: pam_unix(sudo:session): session closed for user root
Oct 01 16:57:54 compute-0 sudo[264378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:57:54 compute-0 sudo[264378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:57:54 compute-0 sudo[264378]: pam_unix(sudo:session): session closed for user root
Oct 01 16:57:54 compute-0 sudo[264403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:57:54 compute-0 sudo[264403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:57:54 compute-0 podman[264469]: 2025-10-01 16:57:54.677026492 +0000 UTC m=+0.070498389 container create dc43efee2b5cc152ca968042bfa031ea0305caad84c0cbe8f36856616b478350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 16:57:54 compute-0 systemd[1]: Started libpod-conmon-dc43efee2b5cc152ca968042bfa031ea0305caad84c0cbe8f36856616b478350.scope.
Oct 01 16:57:54 compute-0 podman[264469]: 2025-10-01 16:57:54.651316335 +0000 UTC m=+0.044788262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:57:54 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:57:54 compute-0 podman[264469]: 2025-10-01 16:57:54.774046837 +0000 UTC m=+0.167518784 container init dc43efee2b5cc152ca968042bfa031ea0305caad84c0cbe8f36856616b478350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 16:57:54 compute-0 podman[264469]: 2025-10-01 16:57:54.782168928 +0000 UTC m=+0.175640865 container start dc43efee2b5cc152ca968042bfa031ea0305caad84c0cbe8f36856616b478350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:57:54 compute-0 podman[264469]: 2025-10-01 16:57:54.787417282 +0000 UTC m=+0.180889229 container attach dc43efee2b5cc152ca968042bfa031ea0305caad84c0cbe8f36856616b478350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:57:54 compute-0 nostalgic_hodgkin[264486]: 167 167
Oct 01 16:57:54 compute-0 systemd[1]: libpod-dc43efee2b5cc152ca968042bfa031ea0305caad84c0cbe8f36856616b478350.scope: Deactivated successfully.
Oct 01 16:57:54 compute-0 podman[264469]: 2025-10-01 16:57:54.790212336 +0000 UTC m=+0.183684253 container died dc43efee2b5cc152ca968042bfa031ea0305caad84c0cbe8f36856616b478350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 16:57:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e6018c58055a9e4ac767d853b39fafc80b7beee7970ce6f2c4a9cc18748a3af-merged.mount: Deactivated successfully.
Oct 01 16:57:54 compute-0 podman[264469]: 2025-10-01 16:57:54.843675059 +0000 UTC m=+0.237146956 container remove dc43efee2b5cc152ca968042bfa031ea0305caad84c0cbe8f36856616b478350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:57:54 compute-0 systemd[1]: libpod-conmon-dc43efee2b5cc152ca968042bfa031ea0305caad84c0cbe8f36856616b478350.scope: Deactivated successfully.
Oct 01 16:57:54 compute-0 ceph-mon[74273]: pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:55 compute-0 podman[264510]: 2025-10-01 16:57:55.077534756 +0000 UTC m=+0.075086201 container create b2ae8d407dcfbac217ab357340cf55816247d5f5a545321e91eaa6a1314ea077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 01 16:57:55 compute-0 systemd[1]: Started libpod-conmon-b2ae8d407dcfbac217ab357340cf55816247d5f5a545321e91eaa6a1314ea077.scope.
Oct 01 16:57:55 compute-0 podman[264510]: 2025-10-01 16:57:55.049186204 +0000 UTC m=+0.046737699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:57:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:57:55 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c784f26a49f27aa7b3e76a686767cb5da2b2dd96b91bf95557e62ce2c6a730c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c784f26a49f27aa7b3e76a686767cb5da2b2dd96b91bf95557e62ce2c6a730c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c784f26a49f27aa7b3e76a686767cb5da2b2dd96b91bf95557e62ce2c6a730c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c784f26a49f27aa7b3e76a686767cb5da2b2dd96b91bf95557e62ce2c6a730c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:57:55 compute-0 podman[264510]: 2025-10-01 16:57:55.188350905 +0000 UTC m=+0.185902390 container init b2ae8d407dcfbac217ab357340cf55816247d5f5a545321e91eaa6a1314ea077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_meitner, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:57:55 compute-0 podman[264510]: 2025-10-01 16:57:55.199019319 +0000 UTC m=+0.196570764 container start b2ae8d407dcfbac217ab357340cf55816247d5f5a545321e91eaa6a1314ea077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 16:57:55 compute-0 podman[264510]: 2025-10-01 16:57:55.203511358 +0000 UTC m=+0.201062803 container attach b2ae8d407dcfbac217ab357340cf55816247d5f5a545321e91eaa6a1314ea077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_meitner, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 16:57:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:56 compute-0 sweet_meitner[264527]: {
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:         "osd_id": 2,
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:         "type": "bluestore"
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:     },
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:         "osd_id": 0,
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:         "type": "bluestore"
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:     },
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:         "osd_id": 1,
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:         "type": "bluestore"
Oct 01 16:57:56 compute-0 sweet_meitner[264527]:     }
Oct 01 16:57:56 compute-0 sweet_meitner[264527]: }
Oct 01 16:57:56 compute-0 systemd[1]: libpod-b2ae8d407dcfbac217ab357340cf55816247d5f5a545321e91eaa6a1314ea077.scope: Deactivated successfully.
Oct 01 16:57:56 compute-0 systemd[1]: libpod-b2ae8d407dcfbac217ab357340cf55816247d5f5a545321e91eaa6a1314ea077.scope: Consumed 1.096s CPU time.
Oct 01 16:57:56 compute-0 podman[264510]: 2025-10-01 16:57:56.287416304 +0000 UTC m=+1.284967779 container died b2ae8d407dcfbac217ab357340cf55816247d5f5a545321e91eaa6a1314ea077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:57:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c784f26a49f27aa7b3e76a686767cb5da2b2dd96b91bf95557e62ce2c6a730c4-merged.mount: Deactivated successfully.
Oct 01 16:57:56 compute-0 podman[264510]: 2025-10-01 16:57:56.360282897 +0000 UTC m=+1.357834332 container remove b2ae8d407dcfbac217ab357340cf55816247d5f5a545321e91eaa6a1314ea077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_meitner, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:57:56 compute-0 systemd[1]: libpod-conmon-b2ae8d407dcfbac217ab357340cf55816247d5f5a545321e91eaa6a1314ea077.scope: Deactivated successfully.
Oct 01 16:57:56 compute-0 sudo[264403]: pam_unix(sudo:session): session closed for user root
Oct 01 16:57:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:57:56 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:57:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:57:56 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:57:56 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 630d10dd-dff4-40ae-854d-30e3c0d912d1 does not exist
Oct 01 16:57:56 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev db4476f1-6577-4bb0-8c5b-249d879727f6 does not exist
Oct 01 16:57:56 compute-0 sudo[264571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:57:56 compute-0 sudo[264571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:57:56 compute-0 sudo[264571]: pam_unix(sudo:session): session closed for user root
Oct 01 16:57:56 compute-0 sudo[264596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:57:56 compute-0 sudo[264596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:57:56 compute-0 sudo[264596]: pam_unix(sudo:session): session closed for user root
Oct 01 16:57:56 compute-0 ceph-mon[74273]: pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:56 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:57:56 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:57:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:58 compute-0 podman[264621]: 2025-10-01 16:57:58.845565847 +0000 UTC m=+0.145270132 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 16:57:58 compute-0 ceph-mon[74273]: pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:57:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:58:00 compute-0 ceph-mon[74273]: pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:01 compute-0 podman[264649]: 2025-10-01 16:58:01.756285905 +0000 UTC m=+0.066219754 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 01 16:58:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:02 compute-0 ceph-mon[74273]: pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:04 compute-0 ceph-mon[74273]: pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:58:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:06 compute-0 ceph-mon[74273]: pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:08 compute-0 ceph-mon[74273]: pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:58:10 compute-0 ceph-mon[74273]: pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:58:11
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'default.rgw.control', 'backups', 'default.rgw.meta', '.rgw.root', 'images', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms']
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:58:11 compute-0 nova_compute[259504]: 2025-10-01 16:58:11.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:58:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:12 compute-0 ceph-mon[74273]: pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:13 compute-0 nova_compute[259504]: 2025-10-01 16:58:13.746 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:58:13 compute-0 nova_compute[259504]: 2025-10-01 16:58:13.773 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:58:13 compute-0 nova_compute[259504]: 2025-10-01 16:58:13.773 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 16:58:13 compute-0 nova_compute[259504]: 2025-10-01 16:58:13.773 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 16:58:13 compute-0 nova_compute[259504]: 2025-10-01 16:58:13.788 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 16:58:13 compute-0 nova_compute[259504]: 2025-10-01 16:58:13.788 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:58:13 compute-0 nova_compute[259504]: 2025-10-01 16:58:13.788 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:58:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:14 compute-0 nova_compute[259504]: 2025-10-01 16:58:14.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:58:14 compute-0 nova_compute[259504]: 2025-10-01 16:58:14.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:58:14 compute-0 nova_compute[259504]: 2025-10-01 16:58:14.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:58:14 compute-0 nova_compute[259504]: 2025-10-01 16:58:14.752 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 16:58:14 compute-0 nova_compute[259504]: 2025-10-01 16:58:14.752 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:58:14 compute-0 nova_compute[259504]: 2025-10-01 16:58:14.788 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:58:14 compute-0 nova_compute[259504]: 2025-10-01 16:58:14.789 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:58:14 compute-0 nova_compute[259504]: 2025-10-01 16:58:14.789 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:58:14 compute-0 nova_compute[259504]: 2025-10-01 16:58:14.790 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 16:58:14 compute-0 nova_compute[259504]: 2025-10-01 16:58:14.790 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 16:58:14 compute-0 ceph-mon[74273]: pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:58:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 16:58:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3198836077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:58:15 compute-0 nova_compute[259504]: 2025-10-01 16:58:15.255 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 16:58:15 compute-0 nova_compute[259504]: 2025-10-01 16:58:15.468 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 16:58:15 compute-0 nova_compute[259504]: 2025-10-01 16:58:15.470 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5152MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 16:58:15 compute-0 nova_compute[259504]: 2025-10-01 16:58:15.471 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:58:15 compute-0 nova_compute[259504]: 2025-10-01 16:58:15.471 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:58:15 compute-0 nova_compute[259504]: 2025-10-01 16:58:15.551 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 16:58:15 compute-0 nova_compute[259504]: 2025-10-01 16:58:15.552 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 16:58:15 compute-0 nova_compute[259504]: 2025-10-01 16:58:15.575 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 16:58:15 compute-0 podman[264691]: 2025-10-01 16:58:15.777776752 +0000 UTC m=+0.093101335 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 16:58:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:16 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3198836077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:58:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 16:58:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913888758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:58:16 compute-0 nova_compute[259504]: 2025-10-01 16:58:16.069 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 16:58:16 compute-0 nova_compute[259504]: 2025-10-01 16:58:16.077 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 16:58:16 compute-0 nova_compute[259504]: 2025-10-01 16:58:16.104 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 16:58:16 compute-0 nova_compute[259504]: 2025-10-01 16:58:16.108 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 16:58:16 compute-0 nova_compute[259504]: 2025-10-01 16:58:16.109 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:58:17 compute-0 ceph-mon[74273]: pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/913888758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:58:17 compute-0 podman[264734]: 2025-10-01 16:58:17.799085624 +0000 UTC m=+0.102200671 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 01 16:58:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:19 compute-0 ceph-mon[74273]: pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:19 compute-0 nova_compute[259504]: 2025-10-01 16:58:19.108 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:58:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:58:19.963 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:58:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:58:19.963 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:58:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:58:19.963 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:58:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:58:21 compute-0 ceph-mon[74273]: pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:58:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:23 compute-0 ceph-mon[74273]: pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:25 compute-0 ceph-mon[74273]: pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:58:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:26 compute-0 ceph-mon[74273]: pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:28 compute-0 ceph-mon[74273]: pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:29 compute-0 podman[264755]: 2025-10-01 16:58:29.788257907 +0000 UTC m=+0.101835033 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 16:58:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:58:30 compute-0 ceph-mon[74273]: pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:32 compute-0 podman[264781]: 2025-10-01 16:58:32.751920747 +0000 UTC m=+0.072716842 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 16:58:32 compute-0 ceph-mon[74273]: pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:35 compute-0 ceph-mon[74273]: pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:58:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:37 compute-0 ceph-mon[74273]: pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:39 compute-0 ceph-mon[74273]: pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:58:41 compute-0 ceph-mon[74273]: pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:58:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:58:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:58:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:58:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:58:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:58:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:43 compute-0 ceph-mon[74273]: pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 16:58:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3457545107' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 16:58:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 16:58:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3457545107' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 16:58:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3457545107' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 16:58:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3457545107' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 16:58:45 compute-0 ceph-mon[74273]: pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:58:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:46 compute-0 podman[264800]: 2025-10-01 16:58:46.777154922 +0000 UTC m=+0.084517462 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 01 16:58:47 compute-0 ceph-mon[74273]: pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:48 compute-0 podman[264820]: 2025-10-01 16:58:48.740654448 +0000 UTC m=+0.060126915 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 16:58:49 compute-0 ceph-mon[74273]: pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:58:51 compute-0 ceph-mon[74273]: pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:53 compute-0 ceph-mon[74273]: pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:55 compute-0 ceph-mon[74273]: pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:58:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:56 compute-0 sudo[264842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:58:56 compute-0 sudo[264842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:58:56 compute-0 sudo[264842]: pam_unix(sudo:session): session closed for user root
Oct 01 16:58:56 compute-0 sudo[264867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:58:56 compute-0 sudo[264867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:58:56 compute-0 sudo[264867]: pam_unix(sudo:session): session closed for user root
Oct 01 16:58:56 compute-0 sudo[264892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:58:56 compute-0 sudo[264892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:58:56 compute-0 sudo[264892]: pam_unix(sudo:session): session closed for user root
Oct 01 16:58:56 compute-0 sudo[264917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 16:58:56 compute-0 sudo[264917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:58:57 compute-0 ceph-mon[74273]: pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:57 compute-0 sudo[264917]: pam_unix(sudo:session): session closed for user root
Oct 01 16:58:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:58:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:58:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 16:58:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:58:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 16:58:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:58:57 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev ce6fe9ac-2267-46ec-93b6-c06d7a3250c4 does not exist
Oct 01 16:58:57 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 1620ce23-661d-443d-97b5-71aac016c432 does not exist
Oct 01 16:58:57 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev cb180379-307a-4beb-9ad3-cfa7d0267a00 does not exist
Oct 01 16:58:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 16:58:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:58:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 16:58:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:58:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 16:58:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:58:57 compute-0 sudo[264973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:58:57 compute-0 sudo[264973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:58:57 compute-0 sudo[264973]: pam_unix(sudo:session): session closed for user root
Oct 01 16:58:57 compute-0 sudo[264998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:58:57 compute-0 sudo[264998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:58:57 compute-0 sudo[264998]: pam_unix(sudo:session): session closed for user root
Oct 01 16:58:57 compute-0 sudo[265023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:58:57 compute-0 sudo[265023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:58:57 compute-0 sudo[265023]: pam_unix(sudo:session): session closed for user root
Oct 01 16:58:57 compute-0 sudo[265048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 16:58:57 compute-0 sudo[265048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:58:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:58:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 16:58:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:58:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 16:58:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 16:58:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 16:58:58 compute-0 podman[265113]: 2025-10-01 16:58:58.148249558 +0000 UTC m=+0.047719983 container create a85de5f44c8627ade688bfd61dc2687ea0db670d5b740577a3eaa89b63d8caec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_montalcini, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:58:58 compute-0 systemd[1]: Started libpod-conmon-a85de5f44c8627ade688bfd61dc2687ea0db670d5b740577a3eaa89b63d8caec.scope.
Oct 01 16:58:58 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:58:58 compute-0 podman[265113]: 2025-10-01 16:58:58.131712549 +0000 UTC m=+0.031182994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:58:58 compute-0 podman[265113]: 2025-10-01 16:58:58.234193957 +0000 UTC m=+0.133664422 container init a85de5f44c8627ade688bfd61dc2687ea0db670d5b740577a3eaa89b63d8caec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_montalcini, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:58:58 compute-0 podman[265113]: 2025-10-01 16:58:58.242321822 +0000 UTC m=+0.141792277 container start a85de5f44c8627ade688bfd61dc2687ea0db670d5b740577a3eaa89b63d8caec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_montalcini, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:58:58 compute-0 podman[265113]: 2025-10-01 16:58:58.246829507 +0000 UTC m=+0.146300022 container attach a85de5f44c8627ade688bfd61dc2687ea0db670d5b740577a3eaa89b63d8caec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_montalcini, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 16:58:58 compute-0 vigorous_montalcini[265130]: 167 167
Oct 01 16:58:58 compute-0 systemd[1]: libpod-a85de5f44c8627ade688bfd61dc2687ea0db670d5b740577a3eaa89b63d8caec.scope: Deactivated successfully.
Oct 01 16:58:58 compute-0 podman[265113]: 2025-10-01 16:58:58.249208868 +0000 UTC m=+0.148679323 container died a85de5f44c8627ade688bfd61dc2687ea0db670d5b740577a3eaa89b63d8caec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:58:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-15732fce897ec0744c27ed832086948bf2a2ecc098d7e8bdb9ce2b0543dc7d51-merged.mount: Deactivated successfully.
Oct 01 16:58:58 compute-0 podman[265113]: 2025-10-01 16:58:58.310330984 +0000 UTC m=+0.209801419 container remove a85de5f44c8627ade688bfd61dc2687ea0db670d5b740577a3eaa89b63d8caec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 16:58:58 compute-0 systemd[1]: libpod-conmon-a85de5f44c8627ade688bfd61dc2687ea0db670d5b740577a3eaa89b63d8caec.scope: Deactivated successfully.
Oct 01 16:58:58 compute-0 podman[265153]: 2025-10-01 16:58:58.528779465 +0000 UTC m=+0.062996062 container create 4b0b4d925969ab55bae3b50b7f362c4883458785c4a3b4cf449a1a7e8b91d764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 16:58:58 compute-0 systemd[1]: Started libpod-conmon-4b0b4d925969ab55bae3b50b7f362c4883458785c4a3b4cf449a1a7e8b91d764.scope.
Oct 01 16:58:58 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:58:58 compute-0 podman[265153]: 2025-10-01 16:58:58.506195674 +0000 UTC m=+0.040412281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a32c7f5a4d5146beb9918b3ec41c02a7959126b1b53c1f7e85bdd75d49af84d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a32c7f5a4d5146beb9918b3ec41c02a7959126b1b53c1f7e85bdd75d49af84d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a32c7f5a4d5146beb9918b3ec41c02a7959126b1b53c1f7e85bdd75d49af84d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a32c7f5a4d5146beb9918b3ec41c02a7959126b1b53c1f7e85bdd75d49af84d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a32c7f5a4d5146beb9918b3ec41c02a7959126b1b53c1f7e85bdd75d49af84d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 16:58:58 compute-0 podman[265153]: 2025-10-01 16:58:58.620972765 +0000 UTC m=+0.155189352 container init 4b0b4d925969ab55bae3b50b7f362c4883458785c4a3b4cf449a1a7e8b91d764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 16:58:58 compute-0 podman[265153]: 2025-10-01 16:58:58.63929133 +0000 UTC m=+0.173507927 container start 4b0b4d925969ab55bae3b50b7f362c4883458785c4a3b4cf449a1a7e8b91d764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sanderson, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:58:58 compute-0 podman[265153]: 2025-10-01 16:58:58.643421638 +0000 UTC m=+0.177638205 container attach 4b0b4d925969ab55bae3b50b7f362c4883458785c4a3b4cf449a1a7e8b91d764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 16:58:59 compute-0 ceph-mon[74273]: pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:59 compute-0 awesome_sanderson[265169]: --> passed data devices: 0 physical, 3 LVM
Oct 01 16:58:59 compute-0 awesome_sanderson[265169]: --> relative data size: 1.0
Oct 01 16:58:59 compute-0 awesome_sanderson[265169]: --> All data devices are unavailable
Oct 01 16:58:59 compute-0 systemd[1]: libpod-4b0b4d925969ab55bae3b50b7f362c4883458785c4a3b4cf449a1a7e8b91d764.scope: Deactivated successfully.
Oct 01 16:58:59 compute-0 systemd[1]: libpod-4b0b4d925969ab55bae3b50b7f362c4883458785c4a3b4cf449a1a7e8b91d764.scope: Consumed 1.142s CPU time.
Oct 01 16:58:59 compute-0 podman[265153]: 2025-10-01 16:58:59.815373381 +0000 UTC m=+1.349589978 container died 4b0b4d925969ab55bae3b50b7f362c4883458785c4a3b4cf449a1a7e8b91d764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sanderson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 16:58:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a32c7f5a4d5146beb9918b3ec41c02a7959126b1b53c1f7e85bdd75d49af84d-merged.mount: Deactivated successfully.
Oct 01 16:58:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:58:59 compute-0 podman[265153]: 2025-10-01 16:58:59.982459348 +0000 UTC m=+1.516675925 container remove 4b0b4d925969ab55bae3b50b7f362c4883458785c4a3b4cf449a1a7e8b91d764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sanderson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 16:58:59 compute-0 systemd[1]: libpod-conmon-4b0b4d925969ab55bae3b50b7f362c4883458785c4a3b4cf449a1a7e8b91d764.scope: Deactivated successfully.
Oct 01 16:59:00 compute-0 sudo[265048]: pam_unix(sudo:session): session closed for user root
Oct 01 16:59:00 compute-0 sudo[265226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:59:00 compute-0 sudo[265226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:59:00 compute-0 sudo[265226]: pam_unix(sudo:session): session closed for user root
Oct 01 16:59:00 compute-0 podman[265199]: 2025-10-01 16:59:00.079750548 +0000 UTC m=+0.218629370 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 01 16:59:00 compute-0 sudo[265261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:59:00 compute-0 sudo[265261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:59:00 compute-0 sudo[265261]: pam_unix(sudo:session): session closed for user root
Oct 01 16:59:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:59:00 compute-0 sudo[265286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:59:00 compute-0 sudo[265286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:59:00 compute-0 sudo[265286]: pam_unix(sudo:session): session closed for user root
Oct 01 16:59:00 compute-0 sudo[265311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 16:59:00 compute-0 sudo[265311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:59:00 compute-0 podman[265377]: 2025-10-01 16:59:00.655855858 +0000 UTC m=+0.056491904 container create 0a33ddae8cf6e26abebca4cbf5ee7dac07abe44c2b00ff675d5a72916e69d62c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 16:59:00 compute-0 systemd[1]: Started libpod-conmon-0a33ddae8cf6e26abebca4cbf5ee7dac07abe44c2b00ff675d5a72916e69d62c.scope.
Oct 01 16:59:00 compute-0 podman[265377]: 2025-10-01 16:59:00.629625545 +0000 UTC m=+0.030261651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:59:00 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:59:00 compute-0 podman[265377]: 2025-10-01 16:59:00.757151976 +0000 UTC m=+0.157788052 container init 0a33ddae8cf6e26abebca4cbf5ee7dac07abe44c2b00ff675d5a72916e69d62c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_brown, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:59:00 compute-0 podman[265377]: 2025-10-01 16:59:00.769184531 +0000 UTC m=+0.169820577 container start 0a33ddae8cf6e26abebca4cbf5ee7dac07abe44c2b00ff675d5a72916e69d62c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_brown, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 16:59:00 compute-0 podman[265377]: 2025-10-01 16:59:00.773556056 +0000 UTC m=+0.174192102 container attach 0a33ddae8cf6e26abebca4cbf5ee7dac07abe44c2b00ff675d5a72916e69d62c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:59:00 compute-0 condescending_brown[265394]: 167 167
Oct 01 16:59:00 compute-0 systemd[1]: libpod-0a33ddae8cf6e26abebca4cbf5ee7dac07abe44c2b00ff675d5a72916e69d62c.scope: Deactivated successfully.
Oct 01 16:59:00 compute-0 podman[265377]: 2025-10-01 16:59:00.776609702 +0000 UTC m=+0.177245738 container died 0a33ddae8cf6e26abebca4cbf5ee7dac07abe44c2b00ff675d5a72916e69d62c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_brown, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:59:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c0da7a0c352ce12fe61768dc9b7261cd3da9dd65de96512a29750b8eba30d3a-merged.mount: Deactivated successfully.
Oct 01 16:59:00 compute-0 podman[265377]: 2025-10-01 16:59:00.836702296 +0000 UTC m=+0.237338352 container remove 0a33ddae8cf6e26abebca4cbf5ee7dac07abe44c2b00ff675d5a72916e69d62c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:59:00 compute-0 systemd[1]: libpod-conmon-0a33ddae8cf6e26abebca4cbf5ee7dac07abe44c2b00ff675d5a72916e69d62c.scope: Deactivated successfully.
Oct 01 16:59:01 compute-0 podman[265417]: 2025-10-01 16:59:01.062856356 +0000 UTC m=+0.062014020 container create d448a37622d59b4a2f07c95b55a21b96e6845ce300307149ce630909eff9eec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tesla, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 16:59:01 compute-0 systemd[1]: Started libpod-conmon-d448a37622d59b4a2f07c95b55a21b96e6845ce300307149ce630909eff9eec0.scope.
Oct 01 16:59:01 compute-0 podman[265417]: 2025-10-01 16:59:01.045284435 +0000 UTC m=+0.044442089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:59:01 compute-0 ceph-mon[74273]: pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:01 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594d6fc16f020ed4e36fffab61e321ea481c30230bcd34777c5db5b50d9a8fe7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594d6fc16f020ed4e36fffab61e321ea481c30230bcd34777c5db5b50d9a8fe7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594d6fc16f020ed4e36fffab61e321ea481c30230bcd34777c5db5b50d9a8fe7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594d6fc16f020ed4e36fffab61e321ea481c30230bcd34777c5db5b50d9a8fe7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:59:01 compute-0 podman[265417]: 2025-10-01 16:59:01.182073792 +0000 UTC m=+0.181231506 container init d448a37622d59b4a2f07c95b55a21b96e6845ce300307149ce630909eff9eec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tesla, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 16:59:01 compute-0 podman[265417]: 2025-10-01 16:59:01.196408169 +0000 UTC m=+0.195565833 container start d448a37622d59b4a2f07c95b55a21b96e6845ce300307149ce630909eff9eec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tesla, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 16:59:01 compute-0 podman[265417]: 2025-10-01 16:59:01.201010853 +0000 UTC m=+0.200168577 container attach d448a37622d59b4a2f07c95b55a21b96e6845ce300307149ce630909eff9eec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 16:59:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:01 compute-0 laughing_tesla[265433]: {
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:     "0": [
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:         {
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "devices": [
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "/dev/loop3"
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             ],
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "lv_name": "ceph_lv0",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "lv_size": "21470642176",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "name": "ceph_lv0",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "tags": {
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.cluster_name": "ceph",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.crush_device_class": "",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.encrypted": "0",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.osd_id": "0",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.type": "block",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.vdo": "0"
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             },
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "type": "block",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "vg_name": "ceph_vg0"
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:         }
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:     ],
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:     "1": [
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:         {
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "devices": [
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "/dev/loop4"
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             ],
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "lv_name": "ceph_lv1",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "lv_size": "21470642176",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "name": "ceph_lv1",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "tags": {
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.cluster_name": "ceph",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.crush_device_class": "",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.encrypted": "0",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.osd_id": "1",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.type": "block",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.vdo": "0"
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             },
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "type": "block",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "vg_name": "ceph_vg1"
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:         }
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:     ],
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:     "2": [
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:         {
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "devices": [
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "/dev/loop5"
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             ],
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "lv_name": "ceph_lv2",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "lv_size": "21470642176",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "name": "ceph_lv2",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "tags": {
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.cluster_name": "ceph",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.crush_device_class": "",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.encrypted": "0",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.osd_id": "2",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.type": "block",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:                 "ceph.vdo": "0"
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             },
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "type": "block",
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:             "vg_name": "ceph_vg2"
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:         }
Oct 01 16:59:01 compute-0 laughing_tesla[265433]:     ]
Oct 01 16:59:01 compute-0 laughing_tesla[265433]: }
Oct 01 16:59:01 compute-0 systemd[1]: libpod-d448a37622d59b4a2f07c95b55a21b96e6845ce300307149ce630909eff9eec0.scope: Deactivated successfully.
Oct 01 16:59:01 compute-0 podman[265417]: 2025-10-01 16:59:01.959952764 +0000 UTC m=+0.959110408 container died d448a37622d59b4a2f07c95b55a21b96e6845ce300307149ce630909eff9eec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tesla, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 16:59:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-594d6fc16f020ed4e36fffab61e321ea481c30230bcd34777c5db5b50d9a8fe7-merged.mount: Deactivated successfully.
Oct 01 16:59:02 compute-0 podman[265417]: 2025-10-01 16:59:02.034236746 +0000 UTC m=+1.033394400 container remove d448a37622d59b4a2f07c95b55a21b96e6845ce300307149ce630909eff9eec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:59:02 compute-0 systemd[1]: libpod-conmon-d448a37622d59b4a2f07c95b55a21b96e6845ce300307149ce630909eff9eec0.scope: Deactivated successfully.
Oct 01 16:59:02 compute-0 sudo[265311]: pam_unix(sudo:session): session closed for user root
Oct 01 16:59:02 compute-0 sudo[265453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:59:02 compute-0 sudo[265453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:59:02 compute-0 sudo[265453]: pam_unix(sudo:session): session closed for user root
Oct 01 16:59:02 compute-0 sudo[265478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 16:59:02 compute-0 sudo[265478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:59:02 compute-0 sudo[265478]: pam_unix(sudo:session): session closed for user root
Oct 01 16:59:02 compute-0 sudo[265503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:59:02 compute-0 sudo[265503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:59:02 compute-0 sudo[265503]: pam_unix(sudo:session): session closed for user root
Oct 01 16:59:02 compute-0 sudo[265528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 16:59:02 compute-0 sudo[265528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:59:02 compute-0 podman[265593]: 2025-10-01 16:59:02.913775464 +0000 UTC m=+0.061556504 container create c85165b915239dd5fb6fa4c16586f4e5223060e292805cf49c32ea79ac98cf67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_franklin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:59:02 compute-0 systemd[1]: Started libpod-conmon-c85165b915239dd5fb6fa4c16586f4e5223060e292805cf49c32ea79ac98cf67.scope.
Oct 01 16:59:02 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:59:02 compute-0 podman[265593]: 2025-10-01 16:59:02.889585165 +0000 UTC m=+0.037366285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:59:02 compute-0 podman[265593]: 2025-10-01 16:59:02.990206369 +0000 UTC m=+0.137987439 container init c85165b915239dd5fb6fa4c16586f4e5223060e292805cf49c32ea79ac98cf67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 16:59:03 compute-0 podman[265593]: 2025-10-01 16:59:03.000712766 +0000 UTC m=+0.148493806 container start c85165b915239dd5fb6fa4c16586f4e5223060e292805cf49c32ea79ac98cf67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_franklin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 16:59:03 compute-0 podman[265593]: 2025-10-01 16:59:03.004028999 +0000 UTC m=+0.151810119 container attach c85165b915239dd5fb6fa4c16586f4e5223060e292805cf49c32ea79ac98cf67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 16:59:03 compute-0 podman[265608]: 2025-10-01 16:59:03.006817107 +0000 UTC m=+0.055868578 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 01 16:59:03 compute-0 priceless_franklin[265611]: 167 167
Oct 01 16:59:03 compute-0 systemd[1]: libpod-c85165b915239dd5fb6fa4c16586f4e5223060e292805cf49c32ea79ac98cf67.scope: Deactivated successfully.
Oct 01 16:59:03 compute-0 podman[265593]: 2025-10-01 16:59:03.008820612 +0000 UTC m=+0.156601642 container died c85165b915239dd5fb6fa4c16586f4e5223060e292805cf49c32ea79ac98cf67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_franklin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 16:59:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-99510403b3e64d923f7a5e14718b36e282d66d6aabbf64a19118eb23394aca43-merged.mount: Deactivated successfully.
Oct 01 16:59:03 compute-0 podman[265593]: 2025-10-01 16:59:03.043585386 +0000 UTC m=+0.191366416 container remove c85165b915239dd5fb6fa4c16586f4e5223060e292805cf49c32ea79ac98cf67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 16:59:03 compute-0 systemd[1]: libpod-conmon-c85165b915239dd5fb6fa4c16586f4e5223060e292805cf49c32ea79ac98cf67.scope: Deactivated successfully.
Oct 01 16:59:03 compute-0 ceph-mon[74273]: pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:03 compute-0 podman[265651]: 2025-10-01 16:59:03.239993532 +0000 UTC m=+0.051973180 container create 45b00801e42479366c528c413fc023e7ce35a5d03befdcb58c4f6962fae7b465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 16:59:03 compute-0 systemd[1]: Started libpod-conmon-45b00801e42479366c528c413fc023e7ce35a5d03befdcb58c4f6962fae7b465.scope.
Oct 01 16:59:03 compute-0 podman[265651]: 2025-10-01 16:59:03.217326181 +0000 UTC m=+0.029305859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 16:59:03 compute-0 systemd[1]: Started libcrun container.
Oct 01 16:59:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c16f285977cb933dede4da00adad088aab2c136049257118ed29d9b2edf3daa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 16:59:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c16f285977cb933dede4da00adad088aab2c136049257118ed29d9b2edf3daa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 16:59:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c16f285977cb933dede4da00adad088aab2c136049257118ed29d9b2edf3daa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 16:59:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c16f285977cb933dede4da00adad088aab2c136049257118ed29d9b2edf3daa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 16:59:03 compute-0 podman[265651]: 2025-10-01 16:59:03.356340431 +0000 UTC m=+0.168320119 container init 45b00801e42479366c528c413fc023e7ce35a5d03befdcb58c4f6962fae7b465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banach, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 01 16:59:03 compute-0 podman[265651]: 2025-10-01 16:59:03.364337437 +0000 UTC m=+0.176317115 container start 45b00801e42479366c528c413fc023e7ce35a5d03befdcb58c4f6962fae7b465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:59:03 compute-0 podman[265651]: 2025-10-01 16:59:03.368216807 +0000 UTC m=+0.180196475 container attach 45b00801e42479366c528c413fc023e7ce35a5d03befdcb58c4f6962fae7b465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 16:59:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:04 compute-0 infallible_banach[265667]: {
Oct 01 16:59:04 compute-0 infallible_banach[265667]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 16:59:04 compute-0 infallible_banach[265667]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:59:04 compute-0 infallible_banach[265667]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 16:59:04 compute-0 infallible_banach[265667]:         "osd_id": 2,
Oct 01 16:59:04 compute-0 infallible_banach[265667]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 16:59:04 compute-0 infallible_banach[265667]:         "type": "bluestore"
Oct 01 16:59:04 compute-0 infallible_banach[265667]:     },
Oct 01 16:59:04 compute-0 infallible_banach[265667]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 16:59:04 compute-0 infallible_banach[265667]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:59:04 compute-0 infallible_banach[265667]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 16:59:04 compute-0 infallible_banach[265667]:         "osd_id": 0,
Oct 01 16:59:04 compute-0 infallible_banach[265667]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 16:59:04 compute-0 infallible_banach[265667]:         "type": "bluestore"
Oct 01 16:59:04 compute-0 infallible_banach[265667]:     },
Oct 01 16:59:04 compute-0 infallible_banach[265667]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 16:59:04 compute-0 infallible_banach[265667]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 16:59:04 compute-0 infallible_banach[265667]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 16:59:04 compute-0 infallible_banach[265667]:         "osd_id": 1,
Oct 01 16:59:04 compute-0 infallible_banach[265667]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 16:59:04 compute-0 infallible_banach[265667]:         "type": "bluestore"
Oct 01 16:59:04 compute-0 infallible_banach[265667]:     }
Oct 01 16:59:04 compute-0 infallible_banach[265667]: }
Oct 01 16:59:04 compute-0 systemd[1]: libpod-45b00801e42479366c528c413fc023e7ce35a5d03befdcb58c4f6962fae7b465.scope: Deactivated successfully.
Oct 01 16:59:04 compute-0 systemd[1]: libpod-45b00801e42479366c528c413fc023e7ce35a5d03befdcb58c4f6962fae7b465.scope: Consumed 1.137s CPU time.
Oct 01 16:59:04 compute-0 podman[265651]: 2025-10-01 16:59:04.495753191 +0000 UTC m=+1.307732889 container died 45b00801e42479366c528c413fc023e7ce35a5d03befdcb58c4f6962fae7b465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banach, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 16:59:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c16f285977cb933dede4da00adad088aab2c136049257118ed29d9b2edf3daa-merged.mount: Deactivated successfully.
Oct 01 16:59:04 compute-0 podman[265651]: 2025-10-01 16:59:04.572392885 +0000 UTC m=+1.384372563 container remove 45b00801e42479366c528c413fc023e7ce35a5d03befdcb58c4f6962fae7b465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 16:59:04 compute-0 systemd[1]: libpod-conmon-45b00801e42479366c528c413fc023e7ce35a5d03befdcb58c4f6962fae7b465.scope: Deactivated successfully.
Oct 01 16:59:04 compute-0 sudo[265528]: pam_unix(sudo:session): session closed for user root
Oct 01 16:59:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 16:59:04 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:59:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 16:59:04 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:59:04 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 151f51e5-477b-4900-b595-261385684b9b does not exist
Oct 01 16:59:04 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 8f03e67e-92e4-43dc-b669-5aeb480571dc does not exist
Oct 01 16:59:04 compute-0 sudo[265716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 16:59:04 compute-0 sudo[265716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:59:04 compute-0 sudo[265716]: pam_unix(sudo:session): session closed for user root
Oct 01 16:59:04 compute-0 sudo[265741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 16:59:04 compute-0 sudo[265741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 16:59:04 compute-0 sudo[265741]: pam_unix(sudo:session): session closed for user root
Oct 01 16:59:05 compute-0 ceph-mon[74273]: pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:05 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:59:05 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 16:59:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:59:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:07 compute-0 ceph-mon[74273]: pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:08 compute-0 ceph-mon[74273]: pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:59:10 compute-0 ceph-mon[74273]: pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_16:59:11
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'volumes', 'backups', 'images', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr']
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 16:59:11 compute-0 nova_compute[259504]: 2025-10-01 16:59:11.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:59:11 compute-0 nova_compute[259504]: 2025-10-01 16:59:11.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:59:11 compute-0 nova_compute[259504]: 2025-10-01 16:59:11.751 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 01 16:59:11 compute-0 nova_compute[259504]: 2025-10-01 16:59:11.823 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 01 16:59:11 compute-0 nova_compute[259504]: 2025-10-01 16:59:11.824 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:59:11 compute-0 nova_compute[259504]: 2025-10-01 16:59:11.825 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 01 16:59:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:11 compute-0 nova_compute[259504]: 2025-10-01 16:59:11.899 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:59:12 compute-0 ceph-mon[74273]: pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:13 compute-0 nova_compute[259504]: 2025-10-01 16:59:13.957 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:59:13 compute-0 nova_compute[259504]: 2025-10-01 16:59:13.958 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 16:59:13 compute-0 nova_compute[259504]: 2025-10-01 16:59:13.958 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 16:59:13 compute-0 nova_compute[259504]: 2025-10-01 16:59:13.979 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 16:59:14 compute-0 nova_compute[259504]: 2025-10-01 16:59:14.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:59:14 compute-0 nova_compute[259504]: 2025-10-01 16:59:14.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:59:14 compute-0 ceph-mon[74273]: pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:59:15 compute-0 nova_compute[259504]: 2025-10-01 16:59:15.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:59:15 compute-0 nova_compute[259504]: 2025-10-01 16:59:15.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:59:15 compute-0 nova_compute[259504]: 2025-10-01 16:59:15.750 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 16:59:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:16 compute-0 nova_compute[259504]: 2025-10-01 16:59:16.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:59:16 compute-0 nova_compute[259504]: 2025-10-01 16:59:16.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:59:16 compute-0 nova_compute[259504]: 2025-10-01 16:59:16.829 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:59:16 compute-0 nova_compute[259504]: 2025-10-01 16:59:16.829 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:59:16 compute-0 nova_compute[259504]: 2025-10-01 16:59:16.829 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:59:16 compute-0 nova_compute[259504]: 2025-10-01 16:59:16.830 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 16:59:16 compute-0 nova_compute[259504]: 2025-10-01 16:59:16.830 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 16:59:16 compute-0 ceph-mon[74273]: pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 16:59:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2314427518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:59:17 compute-0 nova_compute[259504]: 2025-10-01 16:59:17.302 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 16:59:17 compute-0 nova_compute[259504]: 2025-10-01 16:59:17.458 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 16:59:17 compute-0 nova_compute[259504]: 2025-10-01 16:59:17.460 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5153MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 16:59:17 compute-0 nova_compute[259504]: 2025-10-01 16:59:17.460 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:59:17 compute-0 nova_compute[259504]: 2025-10-01 16:59:17.460 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:59:17 compute-0 nova_compute[259504]: 2025-10-01 16:59:17.740 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 16:59:17 compute-0 nova_compute[259504]: 2025-10-01 16:59:17.740 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 16:59:17 compute-0 podman[265788]: 2025-10-01 16:59:17.774680763 +0000 UTC m=+0.086804004 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid)
Oct 01 16:59:17 compute-0 nova_compute[259504]: 2025-10-01 16:59:17.829 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Refreshing inventories for resource provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 01 16:59:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:17 compute-0 nova_compute[259504]: 2025-10-01 16:59:17.960 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Updating ProviderTree inventory for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 01 16:59:17 compute-0 nova_compute[259504]: 2025-10-01 16:59:17.961 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Updating inventory in ProviderTree for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 01 16:59:17 compute-0 nova_compute[259504]: 2025-10-01 16:59:17.988 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Refreshing aggregate associations for resource provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 01 16:59:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2314427518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:59:18 compute-0 nova_compute[259504]: 2025-10-01 16:59:18.009 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Refreshing trait associations for resource provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_ABM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_ACCELERATORS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 01 16:59:18 compute-0 nova_compute[259504]: 2025-10-01 16:59:18.029 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 16:59:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 16:59:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/602528211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:59:18 compute-0 nova_compute[259504]: 2025-10-01 16:59:18.460 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 16:59:18 compute-0 nova_compute[259504]: 2025-10-01 16:59:18.467 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 16:59:18 compute-0 nova_compute[259504]: 2025-10-01 16:59:18.498 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 16:59:18 compute-0 nova_compute[259504]: 2025-10-01 16:59:18.500 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 16:59:18 compute-0 nova_compute[259504]: 2025-10-01 16:59:18.500 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:59:19 compute-0 ceph-mon[74273]: pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/602528211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 16:59:19 compute-0 nova_compute[259504]: 2025-10-01 16:59:19.501 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 16:59:19 compute-0 podman[265831]: 2025-10-01 16:59:19.817811459 +0000 UTC m=+0.082137171 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 01 16:59:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:59:19.963 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 16:59:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:59:19.964 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 16:59:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 16:59:19.964 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 16:59:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:59:21 compute-0 ceph-mon[74273]: pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 16:59:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:23 compute-0 ceph-mon[74273]: pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:24 compute-0 ceph-mon[74273]: pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:59:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:26 compute-0 ceph-mon[74273]: pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:28 compute-0 ceph-mon[74273]: pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:59:30 compute-0 podman[265852]: 2025-10-01 16:59:30.815683491 +0000 UTC m=+0.122650060 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 01 16:59:30 compute-0 ceph-mon[74273]: pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:32 compute-0 ceph-mon[74273]: pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:33 compute-0 podman[265879]: 2025-10-01 16:59:33.736811049 +0000 UTC m=+0.054199463 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 01 16:59:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:35 compute-0 ceph-mon[74273]: pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:35.152858) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337975152931, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2047, "num_deletes": 251, "total_data_size": 3430414, "memory_usage": 3489712, "flush_reason": "Manual Compaction"}
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Oct 01 16:59:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337975351791, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3365411, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16349, "largest_seqno": 18395, "table_properties": {"data_size": 3356170, "index_size": 5799, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18333, "raw_average_key_size": 19, "raw_value_size": 3337777, "raw_average_value_size": 3604, "num_data_blocks": 263, "num_entries": 926, "num_filter_entries": 926, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759337745, "oldest_key_time": 1759337745, "file_creation_time": 1759337975, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 198988 microseconds, and 6345 cpu microseconds.
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:35.351844) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3365411 bytes OK
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:35.351861) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:35.354083) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:35.354093) EVENT_LOG_v1 {"time_micros": 1759337975354090, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:35.354108) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3421858, prev total WAL file size 3421858, number of live WAL files 2.
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:35.354835) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3286KB)], [38(7525KB)]
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337975354865, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11071166, "oldest_snapshot_seqno": -1}
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4419 keys, 9303534 bytes, temperature: kUnknown
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337975409660, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9303534, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9270431, "index_size": 20961, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 106756, "raw_average_key_size": 24, "raw_value_size": 9187076, "raw_average_value_size": 2078, "num_data_blocks": 890, "num_entries": 4419, "num_filter_entries": 4419, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336399, "oldest_key_time": 0, "file_creation_time": 1759337975, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:35.410065) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9303534 bytes
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:35.411607) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.5 rd, 169.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.3 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 4933, records dropped: 514 output_compression: NoCompression
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:35.411635) EVENT_LOG_v1 {"time_micros": 1759337975411622, "job": 18, "event": "compaction_finished", "compaction_time_micros": 54949, "compaction_time_cpu_micros": 19868, "output_level": 6, "num_output_files": 1, "total_output_size": 9303534, "num_input_records": 4933, "num_output_records": 4419, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337975413213, "job": 18, "event": "table_file_deletion", "file_number": 40}
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337975416019, "job": 18, "event": "table_file_deletion", "file_number": 38}
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:35.354779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:35.416171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:35.416176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:35.416177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:35.416179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:59:35 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:35.416181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:59:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:36 compute-0 ceph-mon[74273]: pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:39 compute-0 ceph-mon[74273]: pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:59:41 compute-0 ceph-mon[74273]: pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:59:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:59:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:59:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:59:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 16:59:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 16:59:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:43 compute-0 ceph-mon[74273]: pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 16:59:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/388691667' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 16:59:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 16:59:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/388691667' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 16:59:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/388691667' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 16:59:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/388691667' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 16:59:45 compute-0 ceph-mon[74273]: pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:59:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:47 compute-0 ceph-mon[74273]: pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:48 compute-0 podman[265898]: 2025-10-01 16:59:48.751174275 +0000 UTC m=+0.072071105 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 01 16:59:49 compute-0 ceph-mon[74273]: pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:50.182509) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337990182544, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 370, "num_deletes": 250, "total_data_size": 239680, "memory_usage": 247960, "flush_reason": "Manual Compaction"}
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337990186977, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 211288, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18396, "largest_seqno": 18765, "table_properties": {"data_size": 209053, "index_size": 399, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5785, "raw_average_key_size": 19, "raw_value_size": 204623, "raw_average_value_size": 684, "num_data_blocks": 18, "num_entries": 299, "num_filter_entries": 299, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759337976, "oldest_key_time": 1759337976, "file_creation_time": 1759337990, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 4515 microseconds, and 1617 cpu microseconds.
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:50.187024) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 211288 bytes OK
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:50.187042) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:50.189003) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:50.189026) EVENT_LOG_v1 {"time_micros": 1759337990189019, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:50.189046) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 237248, prev total WAL file size 237248, number of live WAL files 2.
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:50.189672) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(206KB)], [41(9085KB)]
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337990189726, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9514822, "oldest_snapshot_seqno": -1}
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4212 keys, 6250947 bytes, temperature: kUnknown
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337990241673, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6250947, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6223689, "index_size": 15629, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 102870, "raw_average_key_size": 24, "raw_value_size": 6148388, "raw_average_value_size": 1459, "num_data_blocks": 658, "num_entries": 4212, "num_filter_entries": 4212, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336399, "oldest_key_time": 0, "file_creation_time": 1759337990, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:50.241987) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6250947 bytes
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:50.244042) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.9 rd, 120.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 8.9 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(74.6) write-amplify(29.6) OK, records in: 4718, records dropped: 506 output_compression: NoCompression
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:50.244071) EVENT_LOG_v1 {"time_micros": 1759337990244058, "job": 20, "event": "compaction_finished", "compaction_time_micros": 52020, "compaction_time_cpu_micros": 29381, "output_level": 6, "num_output_files": 1, "total_output_size": 6250947, "num_input_records": 4718, "num_output_records": 4212, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337990244296, "job": 20, "event": "table_file_deletion", "file_number": 43}
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337990247289, "job": 20, "event": "table_file_deletion", "file_number": 41}
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:50.189421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:50.247390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:50.247398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:50.247402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:50.247406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:59:50 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-16:59:50.247409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 16:59:50 compute-0 podman[265918]: 2025-10-01 16:59:50.755041628 +0000 UTC m=+0.074429345 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 01 16:59:51 compute-0 ceph-mon[74273]: pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Oct 01 16:59:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Oct 01 16:59:52 compute-0 ceph-mon[74273]: pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 01 16:59:52 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Oct 01 16:59:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Oct 01 16:59:53 compute-0 ceph-mon[74273]: osdmap e123: 3 total, 3 up, 3 in
Oct 01 16:59:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Oct 01 16:59:53 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Oct 01 16:59:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Oct 01 16:59:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Oct 01 16:59:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Oct 01 16:59:54 compute-0 ceph-mon[74273]: osdmap e124: 3 total, 3 up, 3 in
Oct 01 16:59:54 compute-0 ceph-mon[74273]: pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Oct 01 16:59:54 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Oct 01 16:59:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 16:59:55 compute-0 ceph-mon[74273]: osdmap e125: 3 total, 3 up, 3 in
Oct 01 16:59:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 852 B/s wr, 11 op/s
Oct 01 16:59:56 compute-0 ceph-mon[74273]: pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 852 B/s wr, 11 op/s
Oct 01 16:59:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Oct 01 16:59:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Oct 01 16:59:57 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Oct 01 16:59:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 907 B/s wr, 12 op/s
Oct 01 16:59:58 compute-0 ceph-mon[74273]: osdmap e126: 3 total, 3 up, 3 in
Oct 01 16:59:58 compute-0 ceph-mon[74273]: pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 907 B/s wr, 12 op/s
Oct 01 16:59:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 6.2 MiB/s wr, 57 op/s
Oct 01 17:00:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:00:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Oct 01 17:00:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Oct 01 17:00:00 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Oct 01 17:00:01 compute-0 ceph-mon[74273]: pgmap v891: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 6.2 MiB/s wr, 57 op/s
Oct 01 17:00:01 compute-0 ceph-mon[74273]: osdmap e127: 3 total, 3 up, 3 in
Oct 01 17:00:01 compute-0 podman[265938]: 2025-10-01 17:00:01.80573656 +0000 UTC m=+0.116794307 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 01 17:00:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.4 MiB/s wr, 41 op/s
Oct 01 17:00:03 compute-0 ceph-mon[74273]: pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.4 MiB/s wr, 41 op/s
Oct 01 17:00:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.1 MiB/s wr, 39 op/s
Oct 01 17:00:04 compute-0 ceph-mon[74273]: pgmap v894: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.1 MiB/s wr, 39 op/s
Oct 01 17:00:04 compute-0 podman[265965]: 2025-10-01 17:00:04.745269093 +0000 UTC m=+0.061576446 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 01 17:00:04 compute-0 sudo[265984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:00:04 compute-0 sudo[265984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:00:04 compute-0 sudo[265984]: pam_unix(sudo:session): session closed for user root
Oct 01 17:00:05 compute-0 sudo[266009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:00:05 compute-0 sudo[266009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:00:05 compute-0 sudo[266009]: pam_unix(sudo:session): session closed for user root
Oct 01 17:00:05 compute-0 sudo[266034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:00:05 compute-0 sudo[266034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:00:05 compute-0 sudo[266034]: pam_unix(sudo:session): session closed for user root
Oct 01 17:00:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:00:05 compute-0 sudo[266059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 17:00:05 compute-0 sudo[266059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:00:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 4.8 MiB/s wr, 36 op/s
Oct 01 17:00:05 compute-0 sudo[266059]: pam_unix(sudo:session): session closed for user root
Oct 01 17:00:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:00:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:00:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 17:00:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:00:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 17:00:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:00:05 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev db97da90-8f4e-4acd-9fb0-f752d10f3104 does not exist
Oct 01 17:00:05 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 3e90850f-ec00-4d5f-a8b0-c258698a03a0 does not exist
Oct 01 17:00:05 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 32107f41-11a6-498a-977e-f31436d33c40 does not exist
Oct 01 17:00:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 17:00:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:00:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 17:00:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:00:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:00:05 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:00:06 compute-0 sudo[266115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:00:06 compute-0 sudo[266115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:00:06 compute-0 sudo[266115]: pam_unix(sudo:session): session closed for user root
Oct 01 17:00:06 compute-0 sudo[266140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:00:06 compute-0 sudo[266140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:00:06 compute-0 sudo[266140]: pam_unix(sudo:session): session closed for user root
Oct 01 17:00:06 compute-0 sudo[266165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:00:06 compute-0 sudo[266165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:00:06 compute-0 sudo[266165]: pam_unix(sudo:session): session closed for user root
Oct 01 17:00:06 compute-0 sudo[266190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 17:00:06 compute-0 sudo[266190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:00:06 compute-0 podman[266257]: 2025-10-01 17:00:06.778208402 +0000 UTC m=+0.066457323 container create 171f3668cd2d64517436011e89ab193cfb8143d8cd5b613ef97e648cb894afb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:00:06 compute-0 systemd[1]: Started libpod-conmon-171f3668cd2d64517436011e89ab193cfb8143d8cd5b613ef97e648cb894afb3.scope.
Oct 01 17:00:06 compute-0 podman[266257]: 2025-10-01 17:00:06.749109832 +0000 UTC m=+0.037358843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:00:06 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:00:06 compute-0 podman[266257]: 2025-10-01 17:00:06.879399701 +0000 UTC m=+0.167648652 container init 171f3668cd2d64517436011e89ab193cfb8143d8cd5b613ef97e648cb894afb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_varahamihira, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:00:06 compute-0 podman[266257]: 2025-10-01 17:00:06.891502199 +0000 UTC m=+0.179751130 container start 171f3668cd2d64517436011e89ab193cfb8143d8cd5b613ef97e648cb894afb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:00:06 compute-0 podman[266257]: 2025-10-01 17:00:06.895109745 +0000 UTC m=+0.183358746 container attach 171f3668cd2d64517436011e89ab193cfb8143d8cd5b613ef97e648cb894afb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_varahamihira, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:00:06 compute-0 happy_varahamihira[266273]: 167 167
Oct 01 17:00:06 compute-0 systemd[1]: libpod-171f3668cd2d64517436011e89ab193cfb8143d8cd5b613ef97e648cb894afb3.scope: Deactivated successfully.
Oct 01 17:00:06 compute-0 podman[266257]: 2025-10-01 17:00:06.898007442 +0000 UTC m=+0.186256353 container died 171f3668cd2d64517436011e89ab193cfb8143d8cd5b613ef97e648cb894afb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_varahamihira, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 17:00:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-71653743cd45065862c00da0d0c841597b1aec56e4561dd0bfb1cf7d457b35f5-merged.mount: Deactivated successfully.
Oct 01 17:00:06 compute-0 podman[266257]: 2025-10-01 17:00:06.945079867 +0000 UTC m=+0.233328778 container remove 171f3668cd2d64517436011e89ab193cfb8143d8cd5b613ef97e648cb894afb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:00:06 compute-0 systemd[1]: libpod-conmon-171f3668cd2d64517436011e89ab193cfb8143d8cd5b613ef97e648cb894afb3.scope: Deactivated successfully.
Oct 01 17:00:06 compute-0 ceph-mon[74273]: pgmap v895: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 4.8 MiB/s wr, 36 op/s
Oct 01 17:00:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:00:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:00:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:00:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:00:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:00:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:00:07 compute-0 podman[266298]: 2025-10-01 17:00:07.154283791 +0000 UTC m=+0.069041136 container create accdb245c52f2f2fe13364ee4d2dbc711f859cc86601ecb6695e50e345609249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:00:07 compute-0 systemd[1]: Started libpod-conmon-accdb245c52f2f2fe13364ee4d2dbc711f859cc86601ecb6695e50e345609249.scope.
Oct 01 17:00:07 compute-0 podman[266298]: 2025-10-01 17:00:07.127987148 +0000 UTC m=+0.042744503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:00:07 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/260ea248b9ce8a71f5045928737029a2b11e032045f14d9826eecca857ee4f31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/260ea248b9ce8a71f5045928737029a2b11e032045f14d9826eecca857ee4f31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/260ea248b9ce8a71f5045928737029a2b11e032045f14d9826eecca857ee4f31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/260ea248b9ce8a71f5045928737029a2b11e032045f14d9826eecca857ee4f31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/260ea248b9ce8a71f5045928737029a2b11e032045f14d9826eecca857ee4f31/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 17:00:07 compute-0 podman[266298]: 2025-10-01 17:00:07.263501291 +0000 UTC m=+0.178258616 container init accdb245c52f2f2fe13364ee4d2dbc711f859cc86601ecb6695e50e345609249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 17:00:07 compute-0 podman[266298]: 2025-10-01 17:00:07.275985968 +0000 UTC m=+0.190743273 container start accdb245c52f2f2fe13364ee4d2dbc711f859cc86601ecb6695e50e345609249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:00:07 compute-0 podman[266298]: 2025-10-01 17:00:07.279440677 +0000 UTC m=+0.194198092 container attach accdb245c52f2f2fe13364ee4d2dbc711f859cc86601ecb6695e50e345609249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 17:00:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 4.1 MiB/s wr, 31 op/s
Oct 01 17:00:08 compute-0 eloquent_sammet[266314]: --> passed data devices: 0 physical, 3 LVM
Oct 01 17:00:08 compute-0 eloquent_sammet[266314]: --> relative data size: 1.0
Oct 01 17:00:08 compute-0 eloquent_sammet[266314]: --> All data devices are unavailable
Oct 01 17:00:08 compute-0 systemd[1]: libpod-accdb245c52f2f2fe13364ee4d2dbc711f859cc86601ecb6695e50e345609249.scope: Deactivated successfully.
Oct 01 17:00:08 compute-0 systemd[1]: libpod-accdb245c52f2f2fe13364ee4d2dbc711f859cc86601ecb6695e50e345609249.scope: Consumed 1.067s CPU time.
Oct 01 17:00:08 compute-0 podman[266343]: 2025-10-01 17:00:08.422418751 +0000 UTC m=+0.029940153 container died accdb245c52f2f2fe13364ee4d2dbc711f859cc86601ecb6695e50e345609249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:00:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-260ea248b9ce8a71f5045928737029a2b11e032045f14d9826eecca857ee4f31-merged.mount: Deactivated successfully.
Oct 01 17:00:08 compute-0 podman[266343]: 2025-10-01 17:00:08.489666836 +0000 UTC m=+0.097188198 container remove accdb245c52f2f2fe13364ee4d2dbc711f859cc86601ecb6695e50e345609249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 17:00:08 compute-0 systemd[1]: libpod-conmon-accdb245c52f2f2fe13364ee4d2dbc711f859cc86601ecb6695e50e345609249.scope: Deactivated successfully.
Oct 01 17:00:08 compute-0 sudo[266190]: pam_unix(sudo:session): session closed for user root
Oct 01 17:00:08 compute-0 sudo[266358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:00:08 compute-0 sudo[266358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:00:08 compute-0 sudo[266358]: pam_unix(sudo:session): session closed for user root
Oct 01 17:00:08 compute-0 sudo[266383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:00:08 compute-0 sudo[266383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:00:08 compute-0 sudo[266383]: pam_unix(sudo:session): session closed for user root
Oct 01 17:00:08 compute-0 sudo[266408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:00:08 compute-0 sudo[266408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:00:08 compute-0 sudo[266408]: pam_unix(sudo:session): session closed for user root
Oct 01 17:00:08 compute-0 sudo[266433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 17:00:08 compute-0 sudo[266433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:00:08 compute-0 ceph-mon[74273]: pgmap v896: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 4.1 MiB/s wr, 31 op/s
Oct 01 17:00:09 compute-0 podman[266501]: 2025-10-01 17:00:09.273948382 +0000 UTC m=+0.077757021 container create 88c502a236848ff1753434000996842605a576369b5808da69b92d74680b7dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:00:09 compute-0 podman[266501]: 2025-10-01 17:00:09.219781759 +0000 UTC m=+0.023590378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:00:09 compute-0 systemd[1]: Started libpod-conmon-88c502a236848ff1753434000996842605a576369b5808da69b92d74680b7dbd.scope.
Oct 01 17:00:09 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:00:09 compute-0 podman[266501]: 2025-10-01 17:00:09.36423326 +0000 UTC m=+0.168041949 container init 88c502a236848ff1753434000996842605a576369b5808da69b92d74680b7dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 17:00:09 compute-0 podman[266501]: 2025-10-01 17:00:09.372703987 +0000 UTC m=+0.176512596 container start 88c502a236848ff1753434000996842605a576369b5808da69b92d74680b7dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:00:09 compute-0 podman[266501]: 2025-10-01 17:00:09.375958538 +0000 UTC m=+0.179767247 container attach 88c502a236848ff1753434000996842605a576369b5808da69b92d74680b7dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:00:09 compute-0 jolly_sanderson[266517]: 167 167
Oct 01 17:00:09 compute-0 systemd[1]: libpod-88c502a236848ff1753434000996842605a576369b5808da69b92d74680b7dbd.scope: Deactivated successfully.
Oct 01 17:00:09 compute-0 conmon[266517]: conmon 88c502a236848ff17534 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-88c502a236848ff1753434000996842605a576369b5808da69b92d74680b7dbd.scope/container/memory.events
Oct 01 17:00:09 compute-0 podman[266501]: 2025-10-01 17:00:09.379840396 +0000 UTC m=+0.183649025 container died 88c502a236848ff1753434000996842605a576369b5808da69b92d74680b7dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 01 17:00:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdbcb02f7f3f951df5eb670160ea30c8a7b0fcd3a38675d50b40f3f2627cde34-merged.mount: Deactivated successfully.
Oct 01 17:00:09 compute-0 podman[266501]: 2025-10-01 17:00:09.433724219 +0000 UTC m=+0.237532848 container remove 88c502a236848ff1753434000996842605a576369b5808da69b92d74680b7dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 17:00:09 compute-0 systemd[1]: libpod-conmon-88c502a236848ff1753434000996842605a576369b5808da69b92d74680b7dbd.scope: Deactivated successfully.
Oct 01 17:00:09 compute-0 podman[266540]: 2025-10-01 17:00:09.639143316 +0000 UTC m=+0.061651138 container create 7d4b856bea31b9bdb849c30f8de61c443be09c34e32dc3ab43ce8b6dec6a35f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_edison, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 17:00:09 compute-0 podman[266540]: 2025-10-01 17:00:09.600293514 +0000 UTC m=+0.022801316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:00:09 compute-0 systemd[1]: Started libpod-conmon-7d4b856bea31b9bdb849c30f8de61c443be09c34e32dc3ab43ce8b6dec6a35f9.scope.
Oct 01 17:00:09 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46c20fc1c5a444d9a896f109d86b18a7cfab3e1c09c2e746f80de5c1ca057e46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46c20fc1c5a444d9a896f109d86b18a7cfab3e1c09c2e746f80de5c1ca057e46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46c20fc1c5a444d9a896f109d86b18a7cfab3e1c09c2e746f80de5c1ca057e46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46c20fc1c5a444d9a896f109d86b18a7cfab3e1c09c2e746f80de5c1ca057e46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:00:09 compute-0 podman[266540]: 2025-10-01 17:00:09.822366649 +0000 UTC m=+0.244874431 container init 7d4b856bea31b9bdb849c30f8de61c443be09c34e32dc3ab43ce8b6dec6a35f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 01 17:00:09 compute-0 podman[266540]: 2025-10-01 17:00:09.833596224 +0000 UTC m=+0.256104006 container start 7d4b856bea31b9bdb849c30f8de61c443be09c34e32dc3ab43ce8b6dec6a35f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_edison, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 17:00:09 compute-0 podman[266540]: 2025-10-01 17:00:09.883962807 +0000 UTC m=+0.306470619 container attach 7d4b856bea31b9bdb849c30f8de61c443be09c34e32dc3ab43ce8b6dec6a35f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:00:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:00:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:00:10 compute-0 great_edison[266556]: {
Oct 01 17:00:10 compute-0 great_edison[266556]:     "0": [
Oct 01 17:00:10 compute-0 great_edison[266556]:         {
Oct 01 17:00:10 compute-0 great_edison[266556]:             "devices": [
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "/dev/loop3"
Oct 01 17:00:10 compute-0 great_edison[266556]:             ],
Oct 01 17:00:10 compute-0 great_edison[266556]:             "lv_name": "ceph_lv0",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "lv_size": "21470642176",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "name": "ceph_lv0",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "tags": {
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.cluster_name": "ceph",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.crush_device_class": "",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.encrypted": "0",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.osd_id": "0",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.type": "block",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.vdo": "0"
Oct 01 17:00:10 compute-0 great_edison[266556]:             },
Oct 01 17:00:10 compute-0 great_edison[266556]:             "type": "block",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "vg_name": "ceph_vg0"
Oct 01 17:00:10 compute-0 great_edison[266556]:         }
Oct 01 17:00:10 compute-0 great_edison[266556]:     ],
Oct 01 17:00:10 compute-0 great_edison[266556]:     "1": [
Oct 01 17:00:10 compute-0 great_edison[266556]:         {
Oct 01 17:00:10 compute-0 great_edison[266556]:             "devices": [
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "/dev/loop4"
Oct 01 17:00:10 compute-0 great_edison[266556]:             ],
Oct 01 17:00:10 compute-0 great_edison[266556]:             "lv_name": "ceph_lv1",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "lv_size": "21470642176",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "name": "ceph_lv1",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "tags": {
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.cluster_name": "ceph",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.crush_device_class": "",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.encrypted": "0",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.osd_id": "1",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.type": "block",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.vdo": "0"
Oct 01 17:00:10 compute-0 great_edison[266556]:             },
Oct 01 17:00:10 compute-0 great_edison[266556]:             "type": "block",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "vg_name": "ceph_vg1"
Oct 01 17:00:10 compute-0 great_edison[266556]:         }
Oct 01 17:00:10 compute-0 great_edison[266556]:     ],
Oct 01 17:00:10 compute-0 great_edison[266556]:     "2": [
Oct 01 17:00:10 compute-0 great_edison[266556]:         {
Oct 01 17:00:10 compute-0 great_edison[266556]:             "devices": [
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "/dev/loop5"
Oct 01 17:00:10 compute-0 great_edison[266556]:             ],
Oct 01 17:00:10 compute-0 great_edison[266556]:             "lv_name": "ceph_lv2",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "lv_size": "21470642176",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "name": "ceph_lv2",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "tags": {
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.cluster_name": "ceph",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.crush_device_class": "",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.encrypted": "0",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.osd_id": "2",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.type": "block",
Oct 01 17:00:10 compute-0 great_edison[266556]:                 "ceph.vdo": "0"
Oct 01 17:00:10 compute-0 great_edison[266556]:             },
Oct 01 17:00:10 compute-0 great_edison[266556]:             "type": "block",
Oct 01 17:00:10 compute-0 great_edison[266556]:             "vg_name": "ceph_vg2"
Oct 01 17:00:10 compute-0 great_edison[266556]:         }
Oct 01 17:00:10 compute-0 great_edison[266556]:     ]
Oct 01 17:00:10 compute-0 great_edison[266556]: }
Oct 01 17:00:10 compute-0 systemd[1]: libpod-7d4b856bea31b9bdb849c30f8de61c443be09c34e32dc3ab43ce8b6dec6a35f9.scope: Deactivated successfully.
Oct 01 17:00:10 compute-0 podman[266565]: 2025-10-01 17:00:10.737011303 +0000 UTC m=+0.044328434 container died 7d4b856bea31b9bdb849c30f8de61c443be09c34e32dc3ab43ce8b6dec6a35f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_edison, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:00:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-46c20fc1c5a444d9a896f109d86b18a7cfab3e1c09c2e746f80de5c1ca057e46-merged.mount: Deactivated successfully.
Oct 01 17:00:10 compute-0 podman[266565]: 2025-10-01 17:00:10.791612809 +0000 UTC m=+0.098929920 container remove 7d4b856bea31b9bdb849c30f8de61c443be09c34e32dc3ab43ce8b6dec6a35f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 17:00:10 compute-0 systemd[1]: libpod-conmon-7d4b856bea31b9bdb849c30f8de61c443be09c34e32dc3ab43ce8b6dec6a35f9.scope: Deactivated successfully.
Oct 01 17:00:10 compute-0 sudo[266433]: pam_unix(sudo:session): session closed for user root
Oct 01 17:00:10 compute-0 sudo[266580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:00:10 compute-0 sudo[266580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:00:10 compute-0 sudo[266580]: pam_unix(sudo:session): session closed for user root
Oct 01 17:00:11 compute-0 sudo[266605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:00:11 compute-0 sudo[266605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:00:11 compute-0 sudo[266605]: pam_unix(sudo:session): session closed for user root
Oct 01 17:00:11 compute-0 sudo[266630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:00:11 compute-0 sudo[266630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:00:11 compute-0 sudo[266630]: pam_unix(sudo:session): session closed for user root
Oct 01 17:00:11 compute-0 ceph-mon[74273]: pgmap v897: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:00:11 compute-0 sudo[266655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 17:00:11 compute-0 sudo[266655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_17:00:11
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'images', 'volumes', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'backups']
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:00:11 compute-0 podman[266719]: 2025-10-01 17:00:11.505486229 +0000 UTC m=+0.039375854 container create 1864fd56439b9cdb933d5d73a77d976c4adb6b1ba4e09f963569f13e7dda76bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hugle, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:00:11 compute-0 systemd[1]: Started libpod-conmon-1864fd56439b9cdb933d5d73a77d976c4adb6b1ba4e09f963569f13e7dda76bb.scope.
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:00:11 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:00:11 compute-0 podman[266719]: 2025-10-01 17:00:11.579648838 +0000 UTC m=+0.113538553 container init 1864fd56439b9cdb933d5d73a77d976c4adb6b1ba4e09f963569f13e7dda76bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:00:11 compute-0 podman[266719]: 2025-10-01 17:00:11.488616656 +0000 UTC m=+0.022506311 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:00:11 compute-0 podman[266719]: 2025-10-01 17:00:11.588070295 +0000 UTC m=+0.121959960 container start 1864fd56439b9cdb933d5d73a77d976c4adb6b1ba4e09f963569f13e7dda76bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hugle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:00:11 compute-0 podman[266719]: 2025-10-01 17:00:11.59219015 +0000 UTC m=+0.126079785 container attach 1864fd56439b9cdb933d5d73a77d976c4adb6b1ba4e09f963569f13e7dda76bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hugle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 17:00:11 compute-0 compassionate_hugle[266735]: 167 167
Oct 01 17:00:11 compute-0 systemd[1]: libpod-1864fd56439b9cdb933d5d73a77d976c4adb6b1ba4e09f963569f13e7dda76bb.scope: Deactivated successfully.
Oct 01 17:00:11 compute-0 podman[266719]: 2025-10-01 17:00:11.595105817 +0000 UTC m=+0.128995472 container died 1864fd56439b9cdb933d5d73a77d976c4adb6b1ba4e09f963569f13e7dda76bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 17:00:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-84a80b4f08e2cb7eb569e549be53adbd4510ca0f499faa249f874becb2b96d68-merged.mount: Deactivated successfully.
Oct 01 17:00:11 compute-0 podman[266719]: 2025-10-01 17:00:11.633287402 +0000 UTC m=+0.167177037 container remove 1864fd56439b9cdb933d5d73a77d976c4adb6b1ba4e09f963569f13e7dda76bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hugle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct 01 17:00:11 compute-0 systemd[1]: libpod-conmon-1864fd56439b9cdb933d5d73a77d976c4adb6b1ba4e09f963569f13e7dda76bb.scope: Deactivated successfully.
Oct 01 17:00:11 compute-0 nova_compute[259504]: 2025-10-01 17:00:11.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:00:11 compute-0 podman[266759]: 2025-10-01 17:00:11.851460109 +0000 UTC m=+0.048453379 container create 8557648ccf77ed3d13873f7696706501add8a49535ecb1294ef5530bb229f588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_goldstine, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 17:00:11 compute-0 systemd[1]: Started libpod-conmon-8557648ccf77ed3d13873f7696706501add8a49535ecb1294ef5530bb229f588.scope.
Oct 01 17:00:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:00:11 compute-0 podman[266759]: 2025-10-01 17:00:11.835391611 +0000 UTC m=+0.032384881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:00:11 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:00:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/603e068c1b53049eb75613c874a29963c60d685db18c650799a1c8468c1ed3e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:00:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/603e068c1b53049eb75613c874a29963c60d685db18c650799a1c8468c1ed3e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:00:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/603e068c1b53049eb75613c874a29963c60d685db18c650799a1c8468c1ed3e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:00:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/603e068c1b53049eb75613c874a29963c60d685db18c650799a1c8468c1ed3e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:00:11 compute-0 podman[266759]: 2025-10-01 17:00:11.957129805 +0000 UTC m=+0.154123145 container init 8557648ccf77ed3d13873f7696706501add8a49535ecb1294ef5530bb229f588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_goldstine, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:00:11 compute-0 podman[266759]: 2025-10-01 17:00:11.97111653 +0000 UTC m=+0.168109800 container start 8557648ccf77ed3d13873f7696706501add8a49535ecb1294ef5530bb229f588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_goldstine, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 17:00:11 compute-0 podman[266759]: 2025-10-01 17:00:11.975054578 +0000 UTC m=+0.172047868 container attach 8557648ccf77ed3d13873f7696706501add8a49535ecb1294ef5530bb229f588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_goldstine, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]: {
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:         "osd_id": 2,
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:         "type": "bluestore"
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:     },
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:         "osd_id": 0,
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:         "type": "bluestore"
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:     },
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:         "osd_id": 1,
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:         "type": "bluestore"
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]:     }
Oct 01 17:00:12 compute-0 thirsty_goldstine[266776]: }
Oct 01 17:00:13 compute-0 systemd[1]: libpod-8557648ccf77ed3d13873f7696706501add8a49535ecb1294ef5530bb229f588.scope: Deactivated successfully.
Oct 01 17:00:13 compute-0 systemd[1]: libpod-8557648ccf77ed3d13873f7696706501add8a49535ecb1294ef5530bb229f588.scope: Consumed 1.046s CPU time.
Oct 01 17:00:13 compute-0 podman[266759]: 2025-10-01 17:00:13.007035246 +0000 UTC m=+1.204028546 container died 8557648ccf77ed3d13873f7696706501add8a49535ecb1294ef5530bb229f588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_goldstine, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 17:00:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-603e068c1b53049eb75613c874a29963c60d685db18c650799a1c8468c1ed3e6-merged.mount: Deactivated successfully.
Oct 01 17:00:13 compute-0 podman[266759]: 2025-10-01 17:00:13.083534593 +0000 UTC m=+1.280527853 container remove 8557648ccf77ed3d13873f7696706501add8a49535ecb1294ef5530bb229f588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_goldstine, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 01 17:00:13 compute-0 ceph-mon[74273]: pgmap v898: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:00:13 compute-0 systemd[1]: libpod-conmon-8557648ccf77ed3d13873f7696706501add8a49535ecb1294ef5530bb229f588.scope: Deactivated successfully.
Oct 01 17:00:13 compute-0 sudo[266655]: pam_unix(sudo:session): session closed for user root
Oct 01 17:00:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:00:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:00:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:00:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:00:13 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev beb675e1-26df-4a90-8769-cabcedf43f06 does not exist
Oct 01 17:00:13 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 34e42923-f8f8-46cb-a93b-45ed090de03b does not exist
Oct 01 17:00:13 compute-0 sudo[266823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:00:13 compute-0 sudo[266823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:00:13 compute-0 sudo[266823]: pam_unix(sudo:session): session closed for user root
Oct 01 17:00:13 compute-0 sudo[266848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 17:00:13 compute-0 sudo[266848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:00:13 compute-0 sudo[266848]: pam_unix(sudo:session): session closed for user root
Oct 01 17:00:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:00:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:00:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:00:14 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:14.215+0000 7f813a030640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:14 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:14.215+0000 7f813a030640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:14 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:14.215+0000 7f813a030640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:14 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:14.215+0000 7f813a030640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:14 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:14.215+0000 7f813a030640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp'
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp' to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta'
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "format": "json"}]: dispatch
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:00:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:00:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5954ec06-59c3-4fff-a9f8-48042027054b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5954ec06-59c3-4fff-a9f8-48042027054b, vol_name:cephfs) < ""
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5954ec06-59c3-4fff-a9f8-48042027054b/.meta.tmp'
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5954ec06-59c3-4fff-a9f8-48042027054b/.meta.tmp' to config b'/volumes/_nogroup/5954ec06-59c3-4fff-a9f8-48042027054b/.meta'
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5954ec06-59c3-4fff-a9f8-48042027054b, vol_name:cephfs) < ""
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5954ec06-59c3-4fff-a9f8-48042027054b", "format": "json"}]: dispatch
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5954ec06-59c3-4fff-a9f8-48042027054b, vol_name:cephfs) < ""
Oct 01 17:00:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5954ec06-59c3-4fff-a9f8-48042027054b, vol_name:cephfs) < ""
Oct 01 17:00:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:00:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:00:14 compute-0 nova_compute[259504]: 2025-10-01 17:00:14.746 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:00:14 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:00:14.865 162304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:71:db', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:60:3f:78:bd:29'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 17:00:14 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:00:14.867 162304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 17:00:15 compute-0 ceph-mon[74273]: pgmap v899: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:00:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:00:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:00:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:00:15 compute-0 nova_compute[259504]: 2025-10-01 17:00:15.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:00:15 compute-0 nova_compute[259504]: 2025-10-01 17:00:15.750 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 17:00:15 compute-0 nova_compute[259504]: 2025-10-01 17:00:15.751 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 17:00:15 compute-0 nova_compute[259504]: 2025-10-01 17:00:15.792 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 17:00:15 compute-0 nova_compute[259504]: 2025-10-01 17:00:15.792 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:00:15 compute-0 nova_compute[259504]: 2025-10-01 17:00:15.792 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:00:15 compute-0 nova_compute[259504]: 2025-10-01 17:00:15.792 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 17:00:15 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:00:15.869 162304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2971fc2-5b75-459a-98a0-6e626d0d4d99, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 17:00:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:00:16 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:00:16 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "format": "json"}]: dispatch
Oct 01 17:00:16 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5954ec06-59c3-4fff-a9f8-48042027054b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:00:16 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5954ec06-59c3-4fff-a9f8-48042027054b", "format": "json"}]: dispatch
Oct 01 17:00:16 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.pmbdpj(active, since 26m)
Oct 01 17:00:16 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8d007d9c-80e4-4e40-ac68-5fe06e924d50", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:00:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8d007d9c-80e4-4e40-ac68-5fe06e924d50, vol_name:cephfs) < ""
Oct 01 17:00:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8d007d9c-80e4-4e40-ac68-5fe06e924d50/.meta.tmp'
Oct 01 17:00:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8d007d9c-80e4-4e40-ac68-5fe06e924d50/.meta.tmp' to config b'/volumes/_nogroup/8d007d9c-80e4-4e40-ac68-5fe06e924d50/.meta'
Oct 01 17:00:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8d007d9c-80e4-4e40-ac68-5fe06e924d50, vol_name:cephfs) < ""
Oct 01 17:00:16 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8d007d9c-80e4-4e40-ac68-5fe06e924d50", "format": "json"}]: dispatch
Oct 01 17:00:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8d007d9c-80e4-4e40-ac68-5fe06e924d50, vol_name:cephfs) < ""
Oct 01 17:00:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8d007d9c-80e4-4e40-ac68-5fe06e924d50, vol_name:cephfs) < ""
Oct 01 17:00:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:00:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:00:16 compute-0 nova_compute[259504]: 2025-10-01 17:00:16.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:00:16 compute-0 nova_compute[259504]: 2025-10-01 17:00:16.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:00:16 compute-0 nova_compute[259504]: 2025-10-01 17:00:16.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:00:16 compute-0 nova_compute[259504]: 2025-10-01 17:00:16.824 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:00:16 compute-0 nova_compute[259504]: 2025-10-01 17:00:16.825 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:00:16 compute-0 nova_compute[259504]: 2025-10-01 17:00:16.825 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:00:16 compute-0 nova_compute[259504]: 2025-10-01 17:00:16.826 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 17:00:16 compute-0 nova_compute[259504]: 2025-10-01 17:00:16.826 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:00:17 compute-0 ceph-mon[74273]: pgmap v900: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:00:17 compute-0 ceph-mon[74273]: mgrmap e10: compute-0.pmbdpj(active, since 26m)
Oct 01 17:00:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:00:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:00:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2022753379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:00:17 compute-0 nova_compute[259504]: 2025-10-01 17:00:17.281 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:00:17 compute-0 nova_compute[259504]: 2025-10-01 17:00:17.499 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 17:00:17 compute-0 nova_compute[259504]: 2025-10-01 17:00:17.501 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5090MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 17:00:17 compute-0 nova_compute[259504]: 2025-10-01 17:00:17.502 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:00:17 compute-0 nova_compute[259504]: 2025-10-01 17:00:17.503 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:00:17 compute-0 nova_compute[259504]: 2025-10-01 17:00:17.899 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 17:00:17 compute-0 nova_compute[259504]: 2025-10-01 17:00:17.901 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 17:00:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:00:17 compute-0 nova_compute[259504]: 2025-10-01 17:00:17.922 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:00:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:00:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2078542980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:00:18 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:00:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33, vol_name:cephfs) < ""
Oct 01 17:00:18 compute-0 nova_compute[259504]: 2025-10-01 17:00:18.331 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:00:18 compute-0 nova_compute[259504]: 2025-10-01 17:00:18.338 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 17:00:18 compute-0 nova_compute[259504]: 2025-10-01 17:00:18.365 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 17:00:18 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8d007d9c-80e4-4e40-ac68-5fe06e924d50", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:00:18 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8d007d9c-80e4-4e40-ac68-5fe06e924d50", "format": "json"}]: dispatch
Oct 01 17:00:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2022753379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:00:18 compute-0 nova_compute[259504]: 2025-10-01 17:00:18.370 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 17:00:18 compute-0 nova_compute[259504]: 2025-10-01 17:00:18.370 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:00:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33/.meta.tmp'
Oct 01 17:00:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33/.meta.tmp' to config b'/volumes/_nogroup/ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33/.meta'
Oct 01 17:00:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33, vol_name:cephfs) < ""
Oct 01 17:00:18 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33", "format": "json"}]: dispatch
Oct 01 17:00:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33, vol_name:cephfs) < ""
Oct 01 17:00:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33, vol_name:cephfs) < ""
Oct 01 17:00:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:00:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:00:18 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b6442d06-c01f-49a2-aa90-70b48363f5f9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:00:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b6442d06-c01f-49a2-aa90-70b48363f5f9, vol_name:cephfs) < ""
Oct 01 17:00:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b6442d06-c01f-49a2-aa90-70b48363f5f9/.meta.tmp'
Oct 01 17:00:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b6442d06-c01f-49a2-aa90-70b48363f5f9/.meta.tmp' to config b'/volumes/_nogroup/b6442d06-c01f-49a2-aa90-70b48363f5f9/.meta'
Oct 01 17:00:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b6442d06-c01f-49a2-aa90-70b48363f5f9, vol_name:cephfs) < ""
Oct 01 17:00:18 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b6442d06-c01f-49a2-aa90-70b48363f5f9", "format": "json"}]: dispatch
Oct 01 17:00:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b6442d06-c01f-49a2-aa90-70b48363f5f9, vol_name:cephfs) < ""
Oct 01 17:00:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b6442d06-c01f-49a2-aa90-70b48363f5f9, vol_name:cephfs) < ""
Oct 01 17:00:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:00:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:00:19 compute-0 nova_compute[259504]: 2025-10-01 17:00:19.367 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:00:19 compute-0 ceph-mon[74273]: pgmap v901: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:00:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2078542980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:00:19 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:00:19 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33", "format": "json"}]: dispatch
Oct 01 17:00:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:00:19 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b6442d06-c01f-49a2-aa90-70b48363f5f9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:00:19 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b6442d06-c01f-49a2-aa90-70b48363f5f9", "format": "json"}]: dispatch
Oct 01 17:00:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:00:19 compute-0 nova_compute[259504]: 2025-10-01 17:00:19.456 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:00:19 compute-0 podman[266931]: 2025-10-01 17:00:19.769145846 +0000 UTC m=+0.086190272 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 01 17:00:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s wr, 1 op/s
Oct 01 17:00:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:00:19.964 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:00:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:00:19.964 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:00:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:00:19.965 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:00:20 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8d007d9c-80e4-4e40-ac68-5fe06e924d50", "snap_name": "eaae8952-6b44-417e-bb88-4247cfccb824", "format": "json"}]: dispatch
Oct 01 17:00:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:eaae8952-6b44-417e-bb88-4247cfccb824, sub_name:8d007d9c-80e4-4e40-ac68-5fe06e924d50, vol_name:cephfs) < ""
Oct 01 17:00:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:00:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:eaae8952-6b44-417e-bb88-4247cfccb824, sub_name:8d007d9c-80e4-4e40-ac68-5fe06e924d50, vol_name:cephfs) < ""
Oct 01 17:00:20 compute-0 ceph-mon[74273]: pgmap v902: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s wr, 1 op/s
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.462586279872371e-06 of space, bias 4.0, pg target 0.0017551035358468452 quantized to 16 (current 16)
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:00:21 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8d007d9c-80e4-4e40-ac68-5fe06e924d50", "snap_name": "eaae8952-6b44-417e-bb88-4247cfccb824", "format": "json"}]: dispatch
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "b6442d06-c01f-49a2-aa90-70b48363f5f9", "new_size": 2147483648, "format": "json"}]: dispatch
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:b6442d06-c01f-49a2-aa90-70b48363f5f9, vol_name:cephfs) < ""
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:b6442d06-c01f-49a2-aa90-70b48363f5f9, vol_name:cephfs) < ""
Oct 01 17:00:21 compute-0 podman[266953]: 2025-10-01 17:00:21.748734417 +0000 UTC m=+0.062865536 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd)
Oct 01 17:00:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s wr, 1 op/s
Oct 01 17:00:22 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "b6442d06-c01f-49a2-aa90-70b48363f5f9", "new_size": 2147483648, "format": "json"}]: dispatch
Oct 01 17:00:22 compute-0 ceph-mon[74273]: pgmap v903: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s wr, 1 op/s
Oct 01 17:00:22 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b6442d06-c01f-49a2-aa90-70b48363f5f9", "format": "json"}]: dispatch
Oct 01 17:00:22 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b6442d06-c01f-49a2-aa90-70b48363f5f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:00:22 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b6442d06-c01f-49a2-aa90-70b48363f5f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:00:22 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:22.941+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b6442d06-c01f-49a2-aa90-70b48363f5f9' of type subvolume
Oct 01 17:00:22 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b6442d06-c01f-49a2-aa90-70b48363f5f9' of type subvolume
Oct 01 17:00:22 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b6442d06-c01f-49a2-aa90-70b48363f5f9", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:22 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b6442d06-c01f-49a2-aa90-70b48363f5f9, vol_name:cephfs) < ""
Oct 01 17:00:22 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b6442d06-c01f-49a2-aa90-70b48363f5f9'' moved to trashcan
Oct 01 17:00:22 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:00:22 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b6442d06-c01f-49a2-aa90-70b48363f5f9, vol_name:cephfs) < ""
Oct 01 17:00:22 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:22.959+0000 7f813c835640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:22 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:22 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:22.959+0000 7f813c835640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:22 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:22 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:22.959+0000 7f813c835640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:22 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:22 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:22.959+0000 7f813c835640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:22 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:22 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:22.959+0000 7f813c835640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:22 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:22 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:22.988+0000 7f813b833640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:22 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:22 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:22.988+0000 7f813b833640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:22 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:22 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:22.988+0000 7f813b833640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:22 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:22 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:22.988+0000 7f813b833640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:22 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:22 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:22.988+0000 7f813b833640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:22 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:00:23 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b6442d06-c01f-49a2-aa90-70b48363f5f9", "format": "json"}]: dispatch
Oct 01 17:00:23 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b6442d06-c01f-49a2-aa90-70b48363f5f9", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s wr, 4 op/s
Oct 01 17:00:24 compute-0 ceph-mon[74273]: pgmap v904: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s wr, 4 op/s
Oct 01 17:00:24 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.pmbdpj(active, since 26m)
Oct 01 17:00:24 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33", "format": "json"}]: dispatch
Oct 01 17:00:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:00:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:00:24 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33' of type subvolume
Oct 01 17:00:24 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:24.824+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33' of type subvolume
Oct 01 17:00:24 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33, vol_name:cephfs) < ""
Oct 01 17:00:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33'' moved to trashcan
Oct 01 17:00:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:00:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33, vol_name:cephfs) < ""
Oct 01 17:00:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:00:25 compute-0 ceph-mon[74273]: mgrmap e11: compute-0.pmbdpj(active, since 26m)
Oct 01 17:00:25 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33", "format": "json"}]: dispatch
Oct 01 17:00:25 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ad3738e8-3ae5-4ca0-bfa2-ec80b0780d33", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s wr, 4 op/s
Oct 01 17:00:26 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8803df6e-514e-40bf-8107-3891be3d00b0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:00:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8803df6e-514e-40bf-8107-3891be3d00b0, vol_name:cephfs) < ""
Oct 01 17:00:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8803df6e-514e-40bf-8107-3891be3d00b0/.meta.tmp'
Oct 01 17:00:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8803df6e-514e-40bf-8107-3891be3d00b0/.meta.tmp' to config b'/volumes/_nogroup/8803df6e-514e-40bf-8107-3891be3d00b0/.meta'
Oct 01 17:00:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8803df6e-514e-40bf-8107-3891be3d00b0, vol_name:cephfs) < ""
Oct 01 17:00:26 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8803df6e-514e-40bf-8107-3891be3d00b0", "format": "json"}]: dispatch
Oct 01 17:00:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8803df6e-514e-40bf-8107-3891be3d00b0, vol_name:cephfs) < ""
Oct 01 17:00:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8803df6e-514e-40bf-8107-3891be3d00b0, vol_name:cephfs) < ""
Oct 01 17:00:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:00:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:00:26 compute-0 ceph-mon[74273]: pgmap v905: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s wr, 4 op/s
Oct 01 17:00:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:00:27 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8803df6e-514e-40bf-8107-3891be3d00b0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:00:27 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8803df6e-514e-40bf-8107-3891be3d00b0", "format": "json"}]: dispatch
Oct 01 17:00:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s wr, 4 op/s
Oct 01 17:00:28 compute-0 ceph-mon[74273]: pgmap v906: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s wr, 4 op/s
Oct 01 17:00:29 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "8803df6e-514e-40bf-8107-3891be3d00b0", "new_size": 2147483648, "format": "json"}]: dispatch
Oct 01 17:00:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:8803df6e-514e-40bf-8107-3891be3d00b0, vol_name:cephfs) < ""
Oct 01 17:00:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 21 KiB/s wr, 6 op/s
Oct 01 17:00:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:8803df6e-514e-40bf-8107-3891be3d00b0, vol_name:cephfs) < ""
Oct 01 17:00:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:00:30.227798) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338030227879, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 702, "num_deletes": 255, "total_data_size": 907198, "memory_usage": 919928, "flush_reason": "Manual Compaction"}
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338030237171, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 899787, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18766, "largest_seqno": 19467, "table_properties": {"data_size": 896058, "index_size": 1509, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8532, "raw_average_key_size": 18, "raw_value_size": 888281, "raw_average_value_size": 1956, "num_data_blocks": 68, "num_entries": 454, "num_filter_entries": 454, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759337991, "oldest_key_time": 1759337991, "file_creation_time": 1759338030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 9425 microseconds, and 3119 cpu microseconds.
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:00:30.237216) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 899787 bytes OK
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:00:30.237237) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:00:30.239191) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:00:30.239296) EVENT_LOG_v1 {"time_micros": 1759338030239290, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:00:30.239315) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 903409, prev total WAL file size 906050, number of live WAL files 2.
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:00:30 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "8803df6e-514e-40bf-8107-3891be3d00b0", "new_size": 2147483648, "format": "json"}]: dispatch
Oct 01 17:00:30 compute-0 ceph-mon[74273]: pgmap v907: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 21 KiB/s wr, 6 op/s
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:00:30.240015) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353031' seq:0, type:0; will stop at (end)
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(878KB)], [44(6104KB)]
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338030240097, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7150734, "oldest_snapshot_seqno": -1}
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4138 keys, 7022319 bytes, temperature: kUnknown
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338030279011, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7022319, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6994265, "index_size": 16616, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10373, "raw_key_size": 102643, "raw_average_key_size": 24, "raw_value_size": 6918928, "raw_average_value_size": 1672, "num_data_blocks": 696, "num_entries": 4138, "num_filter_entries": 4138, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336399, "oldest_key_time": 0, "file_creation_time": 1759338030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:00:30.279250) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7022319 bytes
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:00:30.280842) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.4 rd, 180.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 6.0 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(15.8) write-amplify(7.8) OK, records in: 4666, records dropped: 528 output_compression: NoCompression
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:00:30.280861) EVENT_LOG_v1 {"time_micros": 1759338030280851, "job": 22, "event": "compaction_finished", "compaction_time_micros": 38986, "compaction_time_cpu_micros": 17971, "output_level": 6, "num_output_files": 1, "total_output_size": 7022319, "num_input_records": 4666, "num_output_records": 4138, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338030281128, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338030282194, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:00:30.239661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:00:30.282295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:00:30.282303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:00:30.282306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:00:30.282309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:00:30 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:00:30.282311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:00:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 15 KiB/s wr, 4 op/s
Oct 01 17:00:32 compute-0 podman[267000]: 2025-10-01 17:00:32.814328219 +0000 UTC m=+0.120004026 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 01 17:00:33 compute-0 ceph-mon[74273]: pgmap v908: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 15 KiB/s wr, 4 op/s
Oct 01 17:00:33 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8803df6e-514e-40bf-8107-3891be3d00b0", "format": "json"}]: dispatch
Oct 01 17:00:33 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8803df6e-514e-40bf-8107-3891be3d00b0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:00:33 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8803df6e-514e-40bf-8107-3891be3d00b0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:00:33 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8803df6e-514e-40bf-8107-3891be3d00b0' of type subvolume
Oct 01 17:00:33 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:33.460+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8803df6e-514e-40bf-8107-3891be3d00b0' of type subvolume
Oct 01 17:00:33 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8803df6e-514e-40bf-8107-3891be3d00b0", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:33 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8803df6e-514e-40bf-8107-3891be3d00b0, vol_name:cephfs) < ""
Oct 01 17:00:33 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8803df6e-514e-40bf-8107-3891be3d00b0'' moved to trashcan
Oct 01 17:00:33 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:00:33 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8803df6e-514e-40bf-8107-3891be3d00b0, vol_name:cephfs) < ""
Oct 01 17:00:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 17 KiB/s wr, 5 op/s
Oct 01 17:00:34 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8d007d9c-80e4-4e40-ac68-5fe06e924d50", "snap_name": "eaae8952-6b44-417e-bb88-4247cfccb824_5863d869-5948-4ff6-8691-98e4329485df", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eaae8952-6b44-417e-bb88-4247cfccb824_5863d869-5948-4ff6-8691-98e4329485df, sub_name:8d007d9c-80e4-4e40-ac68-5fe06e924d50, vol_name:cephfs) < ""
Oct 01 17:00:34 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8803df6e-514e-40bf-8107-3891be3d00b0", "format": "json"}]: dispatch
Oct 01 17:00:34 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8803df6e-514e-40bf-8107-3891be3d00b0", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:34 compute-0 ceph-mon[74273]: pgmap v909: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 17 KiB/s wr, 5 op/s
Oct 01 17:00:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8d007d9c-80e4-4e40-ac68-5fe06e924d50/.meta.tmp'
Oct 01 17:00:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8d007d9c-80e4-4e40-ac68-5fe06e924d50/.meta.tmp' to config b'/volumes/_nogroup/8d007d9c-80e4-4e40-ac68-5fe06e924d50/.meta'
Oct 01 17:00:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eaae8952-6b44-417e-bb88-4247cfccb824_5863d869-5948-4ff6-8691-98e4329485df, sub_name:8d007d9c-80e4-4e40-ac68-5fe06e924d50, vol_name:cephfs) < ""
Oct 01 17:00:34 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8d007d9c-80e4-4e40-ac68-5fe06e924d50", "snap_name": "eaae8952-6b44-417e-bb88-4247cfccb824", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eaae8952-6b44-417e-bb88-4247cfccb824, sub_name:8d007d9c-80e4-4e40-ac68-5fe06e924d50, vol_name:cephfs) < ""
Oct 01 17:00:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8d007d9c-80e4-4e40-ac68-5fe06e924d50/.meta.tmp'
Oct 01 17:00:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8d007d9c-80e4-4e40-ac68-5fe06e924d50/.meta.tmp' to config b'/volumes/_nogroup/8d007d9c-80e4-4e40-ac68-5fe06e924d50/.meta'
Oct 01 17:00:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eaae8952-6b44-417e-bb88-4247cfccb824, sub_name:8d007d9c-80e4-4e40-ac68-5fe06e924d50, vol_name:cephfs) < ""
Oct 01 17:00:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:00:35 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8d007d9c-80e4-4e40-ac68-5fe06e924d50", "snap_name": "eaae8952-6b44-417e-bb88-4247cfccb824_5863d869-5948-4ff6-8691-98e4329485df", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:35 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8d007d9c-80e4-4e40-ac68-5fe06e924d50", "snap_name": "eaae8952-6b44-417e-bb88-4247cfccb824", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:35 compute-0 podman[267026]: 2025-10-01 17:00:35.791626572 +0000 UTC m=+0.090218818 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 17:00:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 8.0 KiB/s wr, 2 op/s
Oct 01 17:00:36 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:00:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5, vol_name:cephfs) < ""
Oct 01 17:00:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5/.meta.tmp'
Oct 01 17:00:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5/.meta.tmp' to config b'/volumes/_nogroup/78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5/.meta'
Oct 01 17:00:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5, vol_name:cephfs) < ""
Oct 01 17:00:36 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5", "format": "json"}]: dispatch
Oct 01 17:00:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5, vol_name:cephfs) < ""
Oct 01 17:00:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5, vol_name:cephfs) < ""
Oct 01 17:00:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:00:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:00:36 compute-0 ceph-mon[74273]: pgmap v910: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 8.0 KiB/s wr, 2 op/s
Oct 01 17:00:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:00:36 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ae3b1acf-5bd9-4e7d-9dda-2786b242cb55", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:00:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ae3b1acf-5bd9-4e7d-9dda-2786b242cb55, vol_name:cephfs) < ""
Oct 01 17:00:37 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ae3b1acf-5bd9-4e7d-9dda-2786b242cb55/.meta.tmp'
Oct 01 17:00:37 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ae3b1acf-5bd9-4e7d-9dda-2786b242cb55/.meta.tmp' to config b'/volumes/_nogroup/ae3b1acf-5bd9-4e7d-9dda-2786b242cb55/.meta'
Oct 01 17:00:37 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ae3b1acf-5bd9-4e7d-9dda-2786b242cb55, vol_name:cephfs) < ""
Oct 01 17:00:37 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ae3b1acf-5bd9-4e7d-9dda-2786b242cb55", "format": "json"}]: dispatch
Oct 01 17:00:37 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ae3b1acf-5bd9-4e7d-9dda-2786b242cb55, vol_name:cephfs) < ""
Oct 01 17:00:37 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ae3b1acf-5bd9-4e7d-9dda-2786b242cb55, vol_name:cephfs) < ""
Oct 01 17:00:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:00:37 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:00:37 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:00:37 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5", "format": "json"}]: dispatch
Oct 01 17:00:37 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ae3b1acf-5bd9-4e7d-9dda-2786b242cb55", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:00:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:00:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 8.0 KiB/s wr, 2 op/s
Oct 01 17:00:38 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8d007d9c-80e4-4e40-ac68-5fe06e924d50", "format": "json"}]: dispatch
Oct 01 17:00:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8d007d9c-80e4-4e40-ac68-5fe06e924d50, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:00:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8d007d9c-80e4-4e40-ac68-5fe06e924d50, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:00:38 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8d007d9c-80e4-4e40-ac68-5fe06e924d50' of type subvolume
Oct 01 17:00:38 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:38.346+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8d007d9c-80e4-4e40-ac68-5fe06e924d50' of type subvolume
Oct 01 17:00:38 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8d007d9c-80e4-4e40-ac68-5fe06e924d50", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8d007d9c-80e4-4e40-ac68-5fe06e924d50, vol_name:cephfs) < ""
Oct 01 17:00:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8d007d9c-80e4-4e40-ac68-5fe06e924d50'' moved to trashcan
Oct 01 17:00:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:00:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8d007d9c-80e4-4e40-ac68-5fe06e924d50, vol_name:cephfs) < ""
Oct 01 17:00:38 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ae3b1acf-5bd9-4e7d-9dda-2786b242cb55", "format": "json"}]: dispatch
Oct 01 17:00:38 compute-0 ceph-mon[74273]: pgmap v911: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 8.0 KiB/s wr, 2 op/s
Oct 01 17:00:39 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8d007d9c-80e4-4e40-ac68-5fe06e924d50", "format": "json"}]: dispatch
Oct 01 17:00:39 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8d007d9c-80e4-4e40-ac68-5fe06e924d50", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 18 KiB/s wr, 6 op/s
Oct 01 17:00:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:00:40 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ae3b1acf-5bd9-4e7d-9dda-2786b242cb55", "format": "json"}]: dispatch
Oct 01 17:00:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ae3b1acf-5bd9-4e7d-9dda-2786b242cb55, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:00:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ae3b1acf-5bd9-4e7d-9dda-2786b242cb55, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:00:40 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ae3b1acf-5bd9-4e7d-9dda-2786b242cb55' of type subvolume
Oct 01 17:00:40 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:40.562+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ae3b1acf-5bd9-4e7d-9dda-2786b242cb55' of type subvolume
Oct 01 17:00:40 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ae3b1acf-5bd9-4e7d-9dda-2786b242cb55", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ae3b1acf-5bd9-4e7d-9dda-2786b242cb55, vol_name:cephfs) < ""
Oct 01 17:00:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ae3b1acf-5bd9-4e7d-9dda-2786b242cb55'' moved to trashcan
Oct 01 17:00:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:00:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ae3b1acf-5bd9-4e7d-9dda-2786b242cb55, vol_name:cephfs) < ""
Oct 01 17:00:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Oct 01 17:00:40 compute-0 ceph-mon[74273]: pgmap v912: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 18 KiB/s wr, 6 op/s
Oct 01 17:00:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Oct 01 17:00:40 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Oct 01 17:00:40 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:00:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f, vol_name:cephfs) < ""
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f/.meta.tmp'
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f/.meta.tmp' to config b'/volumes/_nogroup/fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f/.meta'
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f, vol_name:cephfs) < ""
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f", "format": "json"}]: dispatch
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f, vol_name:cephfs) < ""
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f, vol_name:cephfs) < ""
Oct 01 17:00:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:00:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:00:41 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ae3b1acf-5bd9-4e7d-9dda-2786b242cb55", "format": "json"}]: dispatch
Oct 01 17:00:41 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ae3b1acf-5bd9-4e7d-9dda-2786b242cb55", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:41 compute-0 ceph-mon[74273]: osdmap e128: 3 total, 3 up, 3 in
Oct 01 17:00:41 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:00:41 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f", "format": "json"}]: dispatch
Oct 01 17:00:41 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5", "format": "json"}]: dispatch
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5' of type subvolume
Oct 01 17:00:41 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:41.904+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5' of type subvolume
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5, vol_name:cephfs) < ""
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5'' moved to trashcan
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 16 KiB/s wr, 5 op/s
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:00:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5, vol_name:cephfs) < ""
Oct 01 17:00:42 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5954ec06-59c3-4fff-a9f8-48042027054b", "format": "json"}]: dispatch
Oct 01 17:00:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5954ec06-59c3-4fff-a9f8-48042027054b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:00:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5954ec06-59c3-4fff-a9f8-48042027054b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:00:42 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5954ec06-59c3-4fff-a9f8-48042027054b' of type subvolume
Oct 01 17:00:42 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:42.066+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5954ec06-59c3-4fff-a9f8-48042027054b' of type subvolume
Oct 01 17:00:42 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5954ec06-59c3-4fff-a9f8-48042027054b", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5954ec06-59c3-4fff-a9f8-48042027054b, vol_name:cephfs) < ""
Oct 01 17:00:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5954ec06-59c3-4fff-a9f8-48042027054b'' moved to trashcan
Oct 01 17:00:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:00:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5954ec06-59c3-4fff-a9f8-48042027054b, vol_name:cephfs) < ""
Oct 01 17:00:42 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5", "format": "json"}]: dispatch
Oct 01 17:00:42 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "78560bf7-3bfe-43b8-a6d9-8fc4f7716ce5", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:42 compute-0 ceph-mon[74273]: pgmap v914: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 16 KiB/s wr, 5 op/s
Oct 01 17:00:42 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5954ec06-59c3-4fff-a9f8-48042027054b", "format": "json"}]: dispatch
Oct 01 17:00:42 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5954ec06-59c3-4fff-a9f8-48042027054b", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 17:00:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1093424943' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:00:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 17:00:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1093424943' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:00:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 30 KiB/s wr, 7 op/s
Oct 01 17:00:44 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3394bfe3-ff37-4de3-a0d3-c419f1e9c40a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:00:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3394bfe3-ff37-4de3-a0d3-c419f1e9c40a, vol_name:cephfs) < ""
Oct 01 17:00:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1093424943' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:00:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1093424943' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:00:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3394bfe3-ff37-4de3-a0d3-c419f1e9c40a/.meta.tmp'
Oct 01 17:00:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3394bfe3-ff37-4de3-a0d3-c419f1e9c40a/.meta.tmp' to config b'/volumes/_nogroup/3394bfe3-ff37-4de3-a0d3-c419f1e9c40a/.meta'
Oct 01 17:00:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3394bfe3-ff37-4de3-a0d3-c419f1e9c40a, vol_name:cephfs) < ""
Oct 01 17:00:44 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3394bfe3-ff37-4de3-a0d3-c419f1e9c40a", "format": "json"}]: dispatch
Oct 01 17:00:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3394bfe3-ff37-4de3-a0d3-c419f1e9c40a, vol_name:cephfs) < ""
Oct 01 17:00:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3394bfe3-ff37-4de3-a0d3-c419f1e9c40a, vol_name:cephfs) < ""
Oct 01 17:00:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:00:44 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:00:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:00:45 compute-0 ceph-mon[74273]: pgmap v915: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 30 KiB/s wr, 7 op/s
Oct 01 17:00:45 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3394bfe3-ff37-4de3-a0d3-c419f1e9c40a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:00:45 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3394bfe3-ff37-4de3-a0d3-c419f1e9c40a", "format": "json"}]: dispatch
Oct 01 17:00:45 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:00:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 30 KiB/s wr, 7 op/s
Oct 01 17:00:46 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f", "format": "json"}]: dispatch
Oct 01 17:00:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:00:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:00:46 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f' of type subvolume
Oct 01 17:00:46 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:46.284+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f' of type subvolume
Oct 01 17:00:46 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f, vol_name:cephfs) < ""
Oct 01 17:00:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f'' moved to trashcan
Oct 01 17:00:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:00:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f, vol_name:cephfs) < ""
Oct 01 17:00:46 compute-0 ceph-mon[74273]: pgmap v916: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 30 KiB/s wr, 7 op/s
Oct 01 17:00:47 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f", "format": "json"}]: dispatch
Oct 01 17:00:47 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fef691f7-09c0-4a8b-8ca5-0edd6d3edf1f", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:47 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3394bfe3-ff37-4de3-a0d3-c419f1e9c40a", "format": "json"}]: dispatch
Oct 01 17:00:47 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3394bfe3-ff37-4de3-a0d3-c419f1e9c40a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:00:47 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3394bfe3-ff37-4de3-a0d3-c419f1e9c40a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:00:47 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:00:47.729+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3394bfe3-ff37-4de3-a0d3-c419f1e9c40a' of type subvolume
Oct 01 17:00:47 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3394bfe3-ff37-4de3-a0d3-c419f1e9c40a' of type subvolume
Oct 01 17:00:47 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3394bfe3-ff37-4de3-a0d3-c419f1e9c40a", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:47 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3394bfe3-ff37-4de3-a0d3-c419f1e9c40a, vol_name:cephfs) < ""
Oct 01 17:00:47 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3394bfe3-ff37-4de3-a0d3-c419f1e9c40a'' moved to trashcan
Oct 01 17:00:47 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:00:47 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3394bfe3-ff37-4de3-a0d3-c419f1e9c40a, vol_name:cephfs) < ""
Oct 01 17:00:47 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 01 17:00:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 30 KiB/s wr, 7 op/s
Oct 01 17:00:48 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3394bfe3-ff37-4de3-a0d3-c419f1e9c40a", "format": "json"}]: dispatch
Oct 01 17:00:48 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3394bfe3-ff37-4de3-a0d3-c419f1e9c40a", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:48 compute-0 ceph-mon[74273]: pgmap v917: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 30 KiB/s wr, 7 op/s
Oct 01 17:00:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 27 KiB/s wr, 6 op/s
Oct 01 17:00:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:00:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Oct 01 17:00:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Oct 01 17:00:50 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Oct 01 17:00:50 compute-0 podman[267045]: 2025-10-01 17:00:50.771405025 +0000 UTC m=+0.085739937 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 01 17:00:50 compute-0 ceph-mon[74273]: pgmap v918: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 27 KiB/s wr, 6 op/s
Oct 01 17:00:50 compute-0 ceph-mon[74273]: osdmap e129: 3 total, 3 up, 3 in
Oct 01 17:00:51 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "56cc5791-7061-44c0-b106-1963f41e0a83", "format": "json"}]: dispatch
Oct 01 17:00:51 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:56cc5791-7061-44c0-b106-1963f41e0a83, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:00:51 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:56cc5791-7061-44c0-b106-1963f41e0a83, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:00:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 27 KiB/s wr, 6 op/s
Oct 01 17:00:52 compute-0 podman[267067]: 2025-10-01 17:00:52.785935156 +0000 UTC m=+0.097900026 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Oct 01 17:00:52 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "56cc5791-7061-44c0-b106-1963f41e0a83", "format": "json"}]: dispatch
Oct 01 17:00:52 compute-0 ceph-mon[74273]: pgmap v920: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 27 KiB/s wr, 6 op/s
Oct 01 17:00:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 14 KiB/s wr, 5 op/s
Oct 01 17:00:55 compute-0 ceph-mon[74273]: pgmap v921: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 14 KiB/s wr, 5 op/s
Oct 01 17:00:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:00:55 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "56cc5791-7061-44c0-b106-1963f41e0a83_64fa0867-43fa-4f45-819a-9b61bcc0a47e", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:56cc5791-7061-44c0-b106-1963f41e0a83_64fa0867-43fa-4f45-819a-9b61bcc0a47e, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:00:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp'
Oct 01 17:00:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp' to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta'
Oct 01 17:00:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:56cc5791-7061-44c0-b106-1963f41e0a83_64fa0867-43fa-4f45-819a-9b61bcc0a47e, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:00:55 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "56cc5791-7061-44c0-b106-1963f41e0a83", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:56cc5791-7061-44c0-b106-1963f41e0a83, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:00:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp'
Oct 01 17:00:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp' to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta'
Oct 01 17:00:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:56cc5791-7061-44c0-b106-1963f41e0a83, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:00:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 14 KiB/s wr, 5 op/s
Oct 01 17:00:57 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "56cc5791-7061-44c0-b106-1963f41e0a83_64fa0867-43fa-4f45-819a-9b61bcc0a47e", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:57 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "56cc5791-7061-44c0-b106-1963f41e0a83", "force": true, "format": "json"}]: dispatch
Oct 01 17:00:57 compute-0 ceph-mon[74273]: pgmap v922: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 14 KiB/s wr, 5 op/s
Oct 01 17:00:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 14 KiB/s wr, 5 op/s
Oct 01 17:00:59 compute-0 ceph-mon[74273]: pgmap v923: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 14 KiB/s wr, 5 op/s
Oct 01 17:00:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 9.5 KiB/s wr, 4 op/s
Oct 01 17:01:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:01:01 compute-0 ceph-mon[74273]: pgmap v924: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 9.5 KiB/s wr, 4 op/s
Oct 01 17:01:01 compute-0 CROND[267088]: (root) CMD (run-parts /etc/cron.hourly)
Oct 01 17:01:01 compute-0 run-parts[267091]: (/etc/cron.hourly) starting 0anacron
Oct 01 17:01:01 compute-0 run-parts[267097]: (/etc/cron.hourly) finished 0anacron
Oct 01 17:01:01 compute-0 CROND[267087]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 01 17:01:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 525 B/s rd, 8.1 KiB/s wr, 3 op/s
Oct 01 17:01:02 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "76e4be4c-744e-45f7-a850-fa855b7c5552", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:01:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:76e4be4c-744e-45f7-a850-fa855b7c5552, vol_name:cephfs) < ""
Oct 01 17:01:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/76e4be4c-744e-45f7-a850-fa855b7c5552/.meta.tmp'
Oct 01 17:01:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/76e4be4c-744e-45f7-a850-fa855b7c5552/.meta.tmp' to config b'/volumes/_nogroup/76e4be4c-744e-45f7-a850-fa855b7c5552/.meta'
Oct 01 17:01:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:76e4be4c-744e-45f7-a850-fa855b7c5552, vol_name:cephfs) < ""
Oct 01 17:01:02 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "76e4be4c-744e-45f7-a850-fa855b7c5552", "format": "json"}]: dispatch
Oct 01 17:01:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:76e4be4c-744e-45f7-a850-fa855b7c5552, vol_name:cephfs) < ""
Oct 01 17:01:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:76e4be4c-744e-45f7-a850-fa855b7c5552, vol_name:cephfs) < ""
Oct 01 17:01:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:01:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:01:03 compute-0 ceph-mon[74273]: pgmap v925: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 525 B/s rd, 8.1 KiB/s wr, 3 op/s
Oct 01 17:01:03 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:01:03 compute-0 podman[267098]: 2025-10-01 17:01:03.825985932 +0000 UTC m=+0.141535929 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 17:01:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 12 KiB/s wr, 4 op/s
Oct 01 17:01:04 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "76e4be4c-744e-45f7-a850-fa855b7c5552", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:01:04 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "76e4be4c-744e-45f7-a850-fa855b7c5552", "format": "json"}]: dispatch
Oct 01 17:01:05 compute-0 ceph-mon[74273]: pgmap v926: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 12 KiB/s wr, 4 op/s
Oct 01 17:01:05 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "65fe10ee-973e-4685-94ee-6c8649be79ad", "format": "json"}]: dispatch
Oct 01 17:01:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:65fe10ee-973e-4685-94ee-6c8649be79ad, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:01:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:65fe10ee-973e-4685-94ee-6c8649be79ad, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 8.0 KiB/s wr, 2 op/s
Oct 01 17:01:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Oct 01 17:01:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Oct 01 17:01:06 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Oct 01 17:01:06 compute-0 podman[267124]: 2025-10-01 17:01:06.81549026 +0000 UTC m=+0.121618893 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:01:07 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "65fe10ee-973e-4685-94ee-6c8649be79ad", "format": "json"}]: dispatch
Oct 01 17:01:07 compute-0 ceph-mon[74273]: pgmap v927: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 8.0 KiB/s wr, 2 op/s
Oct 01 17:01:07 compute-0 ceph-mon[74273]: osdmap e130: 3 total, 3 up, 3 in
Oct 01 17:01:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 9.4 KiB/s wr, 2 op/s
Oct 01 17:01:09 compute-0 ceph-mon[74273]: pgmap v929: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 9.4 KiB/s wr, 2 op/s
Oct 01 17:01:09 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "0efe5646-e561-4cdf-9c56-ad36284bde88", "format": "json"}]: dispatch
Oct 01 17:01:09 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0efe5646-e561-4cdf-9c56-ad36284bde88, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:09 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0efe5646-e561-4cdf-9c56-ad36284bde88, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 2 op/s
Oct 01 17:01:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:01:11 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "0efe5646-e561-4cdf-9c56-ad36284bde88", "format": "json"}]: dispatch
Oct 01 17:01:11 compute-0 ceph-mon[74273]: pgmap v930: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 2 op/s
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_17:01:11
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', '.mgr', 'images', 'backups', 'vms', '.rgw.root', 'default.rgw.meta']
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:01:11 compute-0 nova_compute[259504]: 2025-10-01 17:01:11.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:01:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 2 op/s
Oct 01 17:01:13 compute-0 ceph-mon[74273]: pgmap v931: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 2 op/s
Oct 01 17:01:13 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d0586e81-22a4-46d7-91b6-618299744cd8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:01:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d0586e81-22a4-46d7-91b6-618299744cd8, vol_name:cephfs) < ""
Oct 01 17:01:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d0586e81-22a4-46d7-91b6-618299744cd8/.meta.tmp'
Oct 01 17:01:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d0586e81-22a4-46d7-91b6-618299744cd8/.meta.tmp' to config b'/volumes/_nogroup/d0586e81-22a4-46d7-91b6-618299744cd8/.meta'
Oct 01 17:01:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d0586e81-22a4-46d7-91b6-618299744cd8, vol_name:cephfs) < ""
Oct 01 17:01:13 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d0586e81-22a4-46d7-91b6-618299744cd8", "format": "json"}]: dispatch
Oct 01 17:01:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d0586e81-22a4-46d7-91b6-618299744cd8, vol_name:cephfs) < ""
Oct 01 17:01:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d0586e81-22a4-46d7-91b6-618299744cd8, vol_name:cephfs) < ""
Oct 01 17:01:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:01:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:01:13 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "99077526-b8d6-449c-8b0b-3d920742c77d", "format": "json"}]: dispatch
Oct 01 17:01:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:99077526-b8d6-449c-8b0b-3d920742c77d, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:13 compute-0 sudo[267143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:01:13 compute-0 sudo[267143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:01:13 compute-0 sudo[267143]: pam_unix(sudo:session): session closed for user root
Oct 01 17:01:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:99077526-b8d6-449c-8b0b-3d920742c77d, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:13 compute-0 sudo[267168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:01:13 compute-0 sudo[267168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:01:13 compute-0 sudo[267168]: pam_unix(sudo:session): session closed for user root
Oct 01 17:01:13 compute-0 sudo[267193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:01:13 compute-0 sudo[267193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:01:13 compute-0 sudo[267193]: pam_unix(sudo:session): session closed for user root
Oct 01 17:01:13 compute-0 sudo[267218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 17:01:13 compute-0 sudo[267218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:01:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s wr, 1 op/s
Oct 01 17:01:14 compute-0 sudo[267218]: pam_unix(sudo:session): session closed for user root
Oct 01 17:01:14 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d0586e81-22a4-46d7-91b6-618299744cd8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:01:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:01:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:01:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:01:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 17:01:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:01:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 17:01:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:01:14 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev b04285a2-350b-492d-a859-779fc62f2f71 does not exist
Oct 01 17:01:14 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 82c38a40-51ae-4040-ae0c-deb32264b321 does not exist
Oct 01 17:01:14 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev d41516e3-69b7-43f2-a116-75e681290e9b does not exist
Oct 01 17:01:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 17:01:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:01:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 17:01:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:01:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:01:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:01:14 compute-0 sudo[267274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:01:14 compute-0 sudo[267274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:01:14 compute-0 sudo[267274]: pam_unix(sudo:session): session closed for user root
Oct 01 17:01:14 compute-0 sudo[267299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:01:14 compute-0 sudo[267299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:01:14 compute-0 sudo[267299]: pam_unix(sudo:session): session closed for user root
Oct 01 17:01:14 compute-0 sudo[267324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:01:14 compute-0 sudo[267324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:01:14 compute-0 sudo[267324]: pam_unix(sudo:session): session closed for user root
Oct 01 17:01:14 compute-0 sudo[267349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 17:01:14 compute-0 sudo[267349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:01:14 compute-0 podman[267415]: 2025-10-01 17:01:14.953046017 +0000 UTC m=+0.069456363 container create d3fb1df127b7b7fb2af10e071d994de6accd45f05bcb2d0ff025ce7acccb226b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dhawan, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 17:01:15 compute-0 systemd[1]: Started libpod-conmon-d3fb1df127b7b7fb2af10e071d994de6accd45f05bcb2d0ff025ce7acccb226b.scope.
Oct 01 17:01:15 compute-0 podman[267415]: 2025-10-01 17:01:14.923192572 +0000 UTC m=+0.039602968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:01:15 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:01:15 compute-0 podman[267415]: 2025-10-01 17:01:15.064723581 +0000 UTC m=+0.181133937 container init d3fb1df127b7b7fb2af10e071d994de6accd45f05bcb2d0ff025ce7acccb226b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct 01 17:01:15 compute-0 podman[267415]: 2025-10-01 17:01:15.077741394 +0000 UTC m=+0.194151760 container start d3fb1df127b7b7fb2af10e071d994de6accd45f05bcb2d0ff025ce7acccb226b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dhawan, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:01:15 compute-0 podman[267415]: 2025-10-01 17:01:15.082174889 +0000 UTC m=+0.198585225 container attach d3fb1df127b7b7fb2af10e071d994de6accd45f05bcb2d0ff025ce7acccb226b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 17:01:15 compute-0 wizardly_dhawan[267431]: 167 167
Oct 01 17:01:15 compute-0 systemd[1]: libpod-d3fb1df127b7b7fb2af10e071d994de6accd45f05bcb2d0ff025ce7acccb226b.scope: Deactivated successfully.
Oct 01 17:01:15 compute-0 podman[267415]: 2025-10-01 17:01:15.086607815 +0000 UTC m=+0.203018161 container died d3fb1df127b7b7fb2af10e071d994de6accd45f05bcb2d0ff025ce7acccb226b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dhawan, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 17:01:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-2af6452b38df97ea1a23d28c454891cba398cd9c18b71a27a9df6a1b19b78abf-merged.mount: Deactivated successfully.
Oct 01 17:01:15 compute-0 podman[267415]: 2025-10-01 17:01:15.133693923 +0000 UTC m=+0.250104259 container remove d3fb1df127b7b7fb2af10e071d994de6accd45f05bcb2d0ff025ce7acccb226b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:01:15 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d0586e81-22a4-46d7-91b6-618299744cd8", "format": "json"}]: dispatch
Oct 01 17:01:15 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "99077526-b8d6-449c-8b0b-3d920742c77d", "format": "json"}]: dispatch
Oct 01 17:01:15 compute-0 ceph-mon[74273]: pgmap v932: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s wr, 1 op/s
Oct 01 17:01:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:01:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:01:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:01:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:01:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:01:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:01:15 compute-0 systemd[1]: libpod-conmon-d3fb1df127b7b7fb2af10e071d994de6accd45f05bcb2d0ff025ce7acccb226b.scope: Deactivated successfully.
Oct 01 17:01:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:01:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct 01 17:01:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct 01 17:01:15 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct 01 17:01:15 compute-0 podman[267454]: 2025-10-01 17:01:15.371974083 +0000 UTC m=+0.056811772 container create d85ebce59f9bec072cee5be02b6b59e0330d945fcce22c8ebb07d7b548ab8643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banach, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:01:15 compute-0 systemd[1]: Started libpod-conmon-d85ebce59f9bec072cee5be02b6b59e0330d945fcce22c8ebb07d7b548ab8643.scope.
Oct 01 17:01:15 compute-0 podman[267454]: 2025-10-01 17:01:15.345428048 +0000 UTC m=+0.030265817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:01:15 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099ed83e6842b701b5bfe7987e3ac1e3a684a50b3d240027565f875b9975bc6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099ed83e6842b701b5bfe7987e3ac1e3a684a50b3d240027565f875b9975bc6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099ed83e6842b701b5bfe7987e3ac1e3a684a50b3d240027565f875b9975bc6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099ed83e6842b701b5bfe7987e3ac1e3a684a50b3d240027565f875b9975bc6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099ed83e6842b701b5bfe7987e3ac1e3a684a50b3d240027565f875b9975bc6e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 17:01:15 compute-0 podman[267454]: 2025-10-01 17:01:15.477405079 +0000 UTC m=+0.162242788 container init d85ebce59f9bec072cee5be02b6b59e0330d945fcce22c8ebb07d7b548ab8643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banach, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:01:15 compute-0 podman[267454]: 2025-10-01 17:01:15.492307122 +0000 UTC m=+0.177144801 container start d85ebce59f9bec072cee5be02b6b59e0330d945fcce22c8ebb07d7b548ab8643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:01:15 compute-0 podman[267454]: 2025-10-01 17:01:15.496807196 +0000 UTC m=+0.181644915 container attach d85ebce59f9bec072cee5be02b6b59e0330d945fcce22c8ebb07d7b548ab8643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 17:01:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s wr, 1 op/s
Oct 01 17:01:16 compute-0 ceph-mon[74273]: osdmap e131: 3 total, 3 up, 3 in
Oct 01 17:01:16 compute-0 ceph-mon[74273]: pgmap v934: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s wr, 1 op/s
Oct 01 17:01:16 compute-0 beautiful_banach[267468]: --> passed data devices: 0 physical, 3 LVM
Oct 01 17:01:16 compute-0 beautiful_banach[267468]: --> relative data size: 1.0
Oct 01 17:01:16 compute-0 beautiful_banach[267468]: --> All data devices are unavailable
Oct 01 17:01:16 compute-0 systemd[1]: libpod-d85ebce59f9bec072cee5be02b6b59e0330d945fcce22c8ebb07d7b548ab8643.scope: Deactivated successfully.
Oct 01 17:01:16 compute-0 podman[267454]: 2025-10-01 17:01:16.657800482 +0000 UTC m=+1.342638201 container died d85ebce59f9bec072cee5be02b6b59e0330d945fcce22c8ebb07d7b548ab8643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banach, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:01:16 compute-0 systemd[1]: libpod-d85ebce59f9bec072cee5be02b6b59e0330d945fcce22c8ebb07d7b548ab8643.scope: Consumed 1.119s CPU time.
Oct 01 17:01:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-099ed83e6842b701b5bfe7987e3ac1e3a684a50b3d240027565f875b9975bc6e-merged.mount: Deactivated successfully.
Oct 01 17:01:16 compute-0 podman[267454]: 2025-10-01 17:01:16.73634625 +0000 UTC m=+1.421183939 container remove d85ebce59f9bec072cee5be02b6b59e0330d945fcce22c8ebb07d7b548ab8643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banach, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:01:16 compute-0 nova_compute[259504]: 2025-10-01 17:01:16.746 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:01:16 compute-0 systemd[1]: libpod-conmon-d85ebce59f9bec072cee5be02b6b59e0330d945fcce22c8ebb07d7b548ab8643.scope: Deactivated successfully.
Oct 01 17:01:16 compute-0 nova_compute[259504]: 2025-10-01 17:01:16.749 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:01:16 compute-0 nova_compute[259504]: 2025-10-01 17:01:16.749 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 17:01:16 compute-0 nova_compute[259504]: 2025-10-01 17:01:16.749 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 17:01:16 compute-0 nova_compute[259504]: 2025-10-01 17:01:16.768 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 17:01:16 compute-0 nova_compute[259504]: 2025-10-01 17:01:16.769 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:01:16 compute-0 nova_compute[259504]: 2025-10-01 17:01:16.769 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:01:16 compute-0 sudo[267349]: pam_unix(sudo:session): session closed for user root
Oct 01 17:01:16 compute-0 nova_compute[259504]: 2025-10-01 17:01:16.789 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:01:16 compute-0 nova_compute[259504]: 2025-10-01 17:01:16.789 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:01:16 compute-0 nova_compute[259504]: 2025-10-01 17:01:16.790 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:01:16 compute-0 nova_compute[259504]: 2025-10-01 17:01:16.790 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 17:01:16 compute-0 nova_compute[259504]: 2025-10-01 17:01:16.790 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:01:16 compute-0 sudo[267512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:01:16 compute-0 sudo[267512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:01:16 compute-0 sudo[267512]: pam_unix(sudo:session): session closed for user root
Oct 01 17:01:16 compute-0 sudo[267538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:01:16 compute-0 sudo[267538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:01:16 compute-0 sudo[267538]: pam_unix(sudo:session): session closed for user root
Oct 01 17:01:17 compute-0 sudo[267580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:01:17 compute-0 sudo[267580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:01:17 compute-0 sudo[267580]: pam_unix(sudo:session): session closed for user root
Oct 01 17:01:17 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d0586e81-22a4-46d7-91b6-618299744cd8", "format": "json"}]: dispatch
Oct 01 17:01:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d0586e81-22a4-46d7-91b6-618299744cd8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:01:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d0586e81-22a4-46d7-91b6-618299744cd8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:01:17 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:01:17.043+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd0586e81-22a4-46d7-91b6-618299744cd8' of type subvolume
Oct 01 17:01:17 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd0586e81-22a4-46d7-91b6-618299744cd8' of type subvolume
Oct 01 17:01:17 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d0586e81-22a4-46d7-91b6-618299744cd8", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d0586e81-22a4-46d7-91b6-618299744cd8, vol_name:cephfs) < ""
Oct 01 17:01:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d0586e81-22a4-46d7-91b6-618299744cd8'' moved to trashcan
Oct 01 17:01:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:01:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d0586e81-22a4-46d7-91b6-618299744cd8, vol_name:cephfs) < ""
Oct 01 17:01:17 compute-0 sudo[267607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 17:01:17 compute-0 sudo[267607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:01:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:01:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/247154741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:01:17 compute-0 nova_compute[259504]: 2025-10-01 17:01:17.268 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:01:17 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d0586e81-22a4-46d7-91b6-618299744cd8", "format": "json"}]: dispatch
Oct 01 17:01:17 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d0586e81-22a4-46d7-91b6-618299744cd8", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/247154741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:01:17 compute-0 nova_compute[259504]: 2025-10-01 17:01:17.444 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 17:01:17 compute-0 nova_compute[259504]: 2025-10-01 17:01:17.445 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5083MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 17:01:17 compute-0 nova_compute[259504]: 2025-10-01 17:01:17.445 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:01:17 compute-0 nova_compute[259504]: 2025-10-01 17:01:17.445 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:01:17 compute-0 podman[267674]: 2025-10-01 17:01:17.515843344 +0000 UTC m=+0.067813038 container create e7dc7752844cef02957fa1d2124ba0f05b5e86d686acbc363de11152e57296c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:01:17 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "82d8c993-67bf-46d4-81da-17b2c2b9d616", "format": "json"}]: dispatch
Oct 01 17:01:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:82d8c993-67bf-46d4-81da-17b2c2b9d616, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:82d8c993-67bf-46d4-81da-17b2c2b9d616, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:17 compute-0 systemd[1]: Started libpod-conmon-e7dc7752844cef02957fa1d2124ba0f05b5e86d686acbc363de11152e57296c4.scope.
Oct 01 17:01:17 compute-0 nova_compute[259504]: 2025-10-01 17:01:17.570 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 17:01:17 compute-0 nova_compute[259504]: 2025-10-01 17:01:17.570 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 17:01:17 compute-0 podman[267674]: 2025-10-01 17:01:17.489281499 +0000 UTC m=+0.041251243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:01:17 compute-0 nova_compute[259504]: 2025-10-01 17:01:17.596 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:01:17 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:01:17 compute-0 podman[267674]: 2025-10-01 17:01:17.619921639 +0000 UTC m=+0.171891333 container init e7dc7752844cef02957fa1d2124ba0f05b5e86d686acbc363de11152e57296c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 17:01:17 compute-0 podman[267674]: 2025-10-01 17:01:17.630320028 +0000 UTC m=+0.182289692 container start e7dc7752844cef02957fa1d2124ba0f05b5e86d686acbc363de11152e57296c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:01:17 compute-0 podman[267674]: 2025-10-01 17:01:17.633932221 +0000 UTC m=+0.185901885 container attach e7dc7752844cef02957fa1d2124ba0f05b5e86d686acbc363de11152e57296c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 17:01:17 compute-0 epic_snyder[267690]: 167 167
Oct 01 17:01:17 compute-0 systemd[1]: libpod-e7dc7752844cef02957fa1d2124ba0f05b5e86d686acbc363de11152e57296c4.scope: Deactivated successfully.
Oct 01 17:01:17 compute-0 podman[267674]: 2025-10-01 17:01:17.638076083 +0000 UTC m=+0.190045777 container died e7dc7752844cef02957fa1d2124ba0f05b5e86d686acbc363de11152e57296c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 17:01:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0cac4fe4ad55aa2bdc0c8a26d246f37bd8793f9c6ecb7aaf6383ff8953491c4-merged.mount: Deactivated successfully.
Oct 01 17:01:17 compute-0 podman[267674]: 2025-10-01 17:01:17.680785804 +0000 UTC m=+0.232755458 container remove e7dc7752844cef02957fa1d2124ba0f05b5e86d686acbc363de11152e57296c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 17:01:17 compute-0 systemd[1]: libpod-conmon-e7dc7752844cef02957fa1d2124ba0f05b5e86d686acbc363de11152e57296c4.scope: Deactivated successfully.
Oct 01 17:01:17 compute-0 podman[267734]: 2025-10-01 17:01:17.847587564 +0000 UTC m=+0.055190256 container create 2843481fe40224a7b4891968fe101be6775a7e00461de7f70a2b0ef67ce8ad33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 17:01:17 compute-0 systemd[1]: Started libpod-conmon-2843481fe40224a7b4891968fe101be6775a7e00461de7f70a2b0ef67ce8ad33.scope.
Oct 01 17:01:17 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:01:17 compute-0 podman[267734]: 2025-10-01 17:01:17.820153958 +0000 UTC m=+0.027756640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eb3566bdac706ccfbd93a8811781222765c4adceea215888352cb2b9fdd3a84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eb3566bdac706ccfbd93a8811781222765c4adceea215888352cb2b9fdd3a84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eb3566bdac706ccfbd93a8811781222765c4adceea215888352cb2b9fdd3a84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eb3566bdac706ccfbd93a8811781222765c4adceea215888352cb2b9fdd3a84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:01:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s wr, 1 op/s
Oct 01 17:01:17 compute-0 podman[267734]: 2025-10-01 17:01:17.942639431 +0000 UTC m=+0.150242093 container init 2843481fe40224a7b4891968fe101be6775a7e00461de7f70a2b0ef67ce8ad33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mcnulty, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 17:01:17 compute-0 podman[267734]: 2025-10-01 17:01:17.954021069 +0000 UTC m=+0.161623751 container start 2843481fe40224a7b4891968fe101be6775a7e00461de7f70a2b0ef67ce8ad33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mcnulty, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:01:17 compute-0 podman[267734]: 2025-10-01 17:01:17.958352077 +0000 UTC m=+0.165954749 container attach 2843481fe40224a7b4891968fe101be6775a7e00461de7f70a2b0ef67ce8ad33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mcnulty, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 17:01:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:01:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2295449168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:01:18 compute-0 nova_compute[259504]: 2025-10-01 17:01:18.077 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:01:18 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "76e4be4c-744e-45f7-a850-fa855b7c5552", "format": "json"}]: dispatch
Oct 01 17:01:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:76e4be4c-744e-45f7-a850-fa855b7c5552, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:01:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:76e4be4c-744e-45f7-a850-fa855b7c5552, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:01:18 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:01:18.088+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '76e4be4c-744e-45f7-a850-fa855b7c5552' of type subvolume
Oct 01 17:01:18 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '76e4be4c-744e-45f7-a850-fa855b7c5552' of type subvolume
Oct 01 17:01:18 compute-0 nova_compute[259504]: 2025-10-01 17:01:18.090 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 17:01:18 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "76e4be4c-744e-45f7-a850-fa855b7c5552", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:76e4be4c-744e-45f7-a850-fa855b7c5552, vol_name:cephfs) < ""
Oct 01 17:01:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/76e4be4c-744e-45f7-a850-fa855b7c5552'' moved to trashcan
Oct 01 17:01:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:01:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:76e4be4c-744e-45f7-a850-fa855b7c5552, vol_name:cephfs) < ""
Oct 01 17:01:18 compute-0 nova_compute[259504]: 2025-10-01 17:01:18.109 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 17:01:18 compute-0 nova_compute[259504]: 2025-10-01 17:01:18.113 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 17:01:18 compute-0 nova_compute[259504]: 2025-10-01 17:01:18.113 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:01:18 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "82d8c993-67bf-46d4-81da-17b2c2b9d616", "format": "json"}]: dispatch
Oct 01 17:01:18 compute-0 ceph-mon[74273]: pgmap v935: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s wr, 1 op/s
Oct 01 17:01:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2295449168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:01:18 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "76e4be4c-744e-45f7-a850-fa855b7c5552", "format": "json"}]: dispatch
Oct 01 17:01:18 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "76e4be4c-744e-45f7-a850-fa855b7c5552", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]: {
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:     "0": [
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:         {
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "devices": [
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "/dev/loop3"
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             ],
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "lv_name": "ceph_lv0",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "lv_size": "21470642176",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "name": "ceph_lv0",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "tags": {
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.cluster_name": "ceph",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.crush_device_class": "",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.encrypted": "0",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.osd_id": "0",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.type": "block",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.vdo": "0"
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             },
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "type": "block",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "vg_name": "ceph_vg0"
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:         }
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:     ],
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:     "1": [
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:         {
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "devices": [
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "/dev/loop4"
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             ],
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "lv_name": "ceph_lv1",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "lv_size": "21470642176",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "name": "ceph_lv1",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "tags": {
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.cluster_name": "ceph",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.crush_device_class": "",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.encrypted": "0",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.osd_id": "1",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.type": "block",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.vdo": "0"
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             },
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "type": "block",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "vg_name": "ceph_vg1"
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:         }
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:     ],
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:     "2": [
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:         {
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "devices": [
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "/dev/loop5"
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             ],
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "lv_name": "ceph_lv2",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "lv_size": "21470642176",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "name": "ceph_lv2",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "tags": {
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.cluster_name": "ceph",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.crush_device_class": "",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.encrypted": "0",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.osd_id": "2",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.type": "block",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:                 "ceph.vdo": "0"
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             },
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "type": "block",
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:             "vg_name": "ceph_vg2"
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:         }
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]:     ]
Oct 01 17:01:18 compute-0 unruffled_mcnulty[267751]: }
Oct 01 17:01:18 compute-0 systemd[1]: libpod-2843481fe40224a7b4891968fe101be6775a7e00461de7f70a2b0ef67ce8ad33.scope: Deactivated successfully.
Oct 01 17:01:18 compute-0 podman[267734]: 2025-10-01 17:01:18.754829519 +0000 UTC m=+0.962432211 container died 2843481fe40224a7b4891968fe101be6775a7e00461de7f70a2b0ef67ce8ad33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 17:01:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5eb3566bdac706ccfbd93a8811781222765c4adceea215888352cb2b9fdd3a84-merged.mount: Deactivated successfully.
Oct 01 17:01:18 compute-0 podman[267734]: 2025-10-01 17:01:18.84031682 +0000 UTC m=+1.047919502 container remove 2843481fe40224a7b4891968fe101be6775a7e00461de7f70a2b0ef67ce8ad33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mcnulty, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:01:18 compute-0 systemd[1]: libpod-conmon-2843481fe40224a7b4891968fe101be6775a7e00461de7f70a2b0ef67ce8ad33.scope: Deactivated successfully.
Oct 01 17:01:18 compute-0 sudo[267607]: pam_unix(sudo:session): session closed for user root
Oct 01 17:01:18 compute-0 sudo[267775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:01:19 compute-0 sudo[267775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:01:19 compute-0 sudo[267775]: pam_unix(sudo:session): session closed for user root
Oct 01 17:01:19 compute-0 sudo[267800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:01:19 compute-0 sudo[267800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:01:19 compute-0 nova_compute[259504]: 2025-10-01 17:01:19.095 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:01:19 compute-0 nova_compute[259504]: 2025-10-01 17:01:19.096 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:01:19 compute-0 nova_compute[259504]: 2025-10-01 17:01:19.097 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:01:19 compute-0 nova_compute[259504]: 2025-10-01 17:01:19.097 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 17:01:19 compute-0 sudo[267800]: pam_unix(sudo:session): session closed for user root
Oct 01 17:01:19 compute-0 sudo[267825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:01:19 compute-0 sudo[267825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:01:19 compute-0 sudo[267825]: pam_unix(sudo:session): session closed for user root
Oct 01 17:01:19 compute-0 sudo[267850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 17:01:19 compute-0 sudo[267850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:01:19 compute-0 podman[267915]: 2025-10-01 17:01:19.739080635 +0000 UTC m=+0.067557114 container create 77d55047e44a2cf4fc37f44bd7a18114e0844e35207a94c511c39cbdb445dad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dewdney, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:01:19 compute-0 nova_compute[259504]: 2025-10-01 17:01:19.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:01:19 compute-0 systemd[1]: Started libpod-conmon-77d55047e44a2cf4fc37f44bd7a18114e0844e35207a94c511c39cbdb445dad1.scope.
Oct 01 17:01:19 compute-0 podman[267915]: 2025-10-01 17:01:19.710760168 +0000 UTC m=+0.039236687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:01:19 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:01:19 compute-0 podman[267915]: 2025-10-01 17:01:19.832787191 +0000 UTC m=+0.161263730 container init 77d55047e44a2cf4fc37f44bd7a18114e0844e35207a94c511c39cbdb445dad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dewdney, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:01:19 compute-0 podman[267915]: 2025-10-01 17:01:19.844365805 +0000 UTC m=+0.172842274 container start 77d55047e44a2cf4fc37f44bd7a18114e0844e35207a94c511c39cbdb445dad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dewdney, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:01:19 compute-0 podman[267915]: 2025-10-01 17:01:19.849064155 +0000 UTC m=+0.177540674 container attach 77d55047e44a2cf4fc37f44bd7a18114e0844e35207a94c511c39cbdb445dad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:01:19 compute-0 nostalgic_dewdney[267931]: 167 167
Oct 01 17:01:19 compute-0 podman[267915]: 2025-10-01 17:01:19.850887826 +0000 UTC m=+0.179364295 container died 77d55047e44a2cf4fc37f44bd7a18114e0844e35207a94c511c39cbdb445dad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 17:01:19 compute-0 systemd[1]: libpod-77d55047e44a2cf4fc37f44bd7a18114e0844e35207a94c511c39cbdb445dad1.scope: Deactivated successfully.
Oct 01 17:01:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7792f1a3b96c6ee2068d855cc3f7cc617b2f8cfc68c0eb45ac3937b264e5d40-merged.mount: Deactivated successfully.
Oct 01 17:01:19 compute-0 podman[267915]: 2025-10-01 17:01:19.904665261 +0000 UTC m=+0.233141740 container remove 77d55047e44a2cf4fc37f44bd7a18114e0844e35207a94c511c39cbdb445dad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:01:19 compute-0 systemd[1]: libpod-conmon-77d55047e44a2cf4fc37f44bd7a18114e0844e35207a94c511c39cbdb445dad1.scope: Deactivated successfully.
Oct 01 17:01:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:01:19.969 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:01:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:01:19.969 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:01:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:01:19.969 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:01:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 2 op/s
Oct 01 17:01:20 compute-0 podman[267955]: 2025-10-01 17:01:20.188355595 +0000 UTC m=+0.065077486 container create 401d2873b755f8defc0e16e0e2d1e0599e4fa77015dcb84ef065d717f75ddc92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:01:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:01:20 compute-0 systemd[1]: Started libpod-conmon-401d2873b755f8defc0e16e0e2d1e0599e4fa77015dcb84ef065d717f75ddc92.scope.
Oct 01 17:01:20 compute-0 podman[267955]: 2025-10-01 17:01:20.169732681 +0000 UTC m=+0.046454582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:01:20 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:01:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5da01df8bda6edaeaf689baaf0a4bce39a073b6d6c9836b81a0c72687b183b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:01:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5da01df8bda6edaeaf689baaf0a4bce39a073b6d6c9836b81a0c72687b183b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:01:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5da01df8bda6edaeaf689baaf0a4bce39a073b6d6c9836b81a0c72687b183b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:01:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5da01df8bda6edaeaf689baaf0a4bce39a073b6d6c9836b81a0c72687b183b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:01:20 compute-0 podman[267955]: 2025-10-01 17:01:20.308076527 +0000 UTC m=+0.184798448 container init 401d2873b755f8defc0e16e0e2d1e0599e4fa77015dcb84ef065d717f75ddc92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:01:20 compute-0 podman[267955]: 2025-10-01 17:01:20.321471392 +0000 UTC m=+0.198193313 container start 401d2873b755f8defc0e16e0e2d1e0599e4fa77015dcb84ef065d717f75ddc92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_darwin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 17:01:20 compute-0 podman[267955]: 2025-10-01 17:01:20.325714852 +0000 UTC m=+0.202436773 container attach 401d2873b755f8defc0e16e0e2d1e0599e4fa77015dcb84ef065d717f75ddc92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:01:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:01:20.869 162304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:71:db', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:60:3f:78:bd:29'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 17:01:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:01:20.871 162304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 17:01:21 compute-0 ceph-mon[74273]: pgmap v936: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 2 op/s
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4689453506544248e-05 of space, bias 4.0, pg target 0.0176273442078531 quantized to 16 (current 16)
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:01:21 compute-0 gallant_darwin[267971]: {
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:         "osd_id": 2,
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:         "type": "bluestore"
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:     },
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:         "osd_id": 0,
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:         "type": "bluestore"
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:     },
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:         "osd_id": 1,
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:         "type": "bluestore"
Oct 01 17:01:21 compute-0 gallant_darwin[267971]:     }
Oct 01 17:01:21 compute-0 gallant_darwin[267971]: }
Oct 01 17:01:21 compute-0 systemd[1]: libpod-401d2873b755f8defc0e16e0e2d1e0599e4fa77015dcb84ef065d717f75ddc92.scope: Deactivated successfully.
Oct 01 17:01:21 compute-0 podman[267955]: 2025-10-01 17:01:21.39345853 +0000 UTC m=+1.270180451 container died 401d2873b755f8defc0e16e0e2d1e0599e4fa77015dcb84ef065d717f75ddc92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:01:21 compute-0 systemd[1]: libpod-401d2873b755f8defc0e16e0e2d1e0599e4fa77015dcb84ef065d717f75ddc92.scope: Consumed 1.057s CPU time.
Oct 01 17:01:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d5da01df8bda6edaeaf689baaf0a4bce39a073b6d6c9836b81a0c72687b183b-merged.mount: Deactivated successfully.
Oct 01 17:01:21 compute-0 podman[267955]: 2025-10-01 17:01:21.610012942 +0000 UTC m=+1.486734853 container remove 401d2873b755f8defc0e16e0e2d1e0599e4fa77015dcb84ef065d717f75ddc92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_darwin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 17:01:21 compute-0 podman[268004]: 2025-10-01 17:01:21.619858093 +0000 UTC m=+0.181166656 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:01:21 compute-0 systemd[1]: libpod-conmon-401d2873b755f8defc0e16e0e2d1e0599e4fa77015dcb84ef065d717f75ddc92.scope: Deactivated successfully.
Oct 01 17:01:21 compute-0 sudo[267850]: pam_unix(sudo:session): session closed for user root
Oct 01 17:01:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:01:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:01:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:01:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 362e60ce-43c1-4454-86d7-249cd3c1f450 does not exist
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev c1b86030-f6ec-4692-a66b-664a57c6ac5e does not exist
Oct 01 17:01:21 compute-0 sudo[268038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:01:21 compute-0 sudo[268038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:01:21 compute-0 sudo[268038]: pam_unix(sudo:session): session closed for user root
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "ce72fa9d-6cfa-4bc1-9bbe-b500ad65a9aa", "format": "json"}]: dispatch
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ce72fa9d-6cfa-4bc1-9bbe-b500ad65a9aa, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:21 compute-0 sudo[268063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 17:01:21 compute-0 sudo[268063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:01:21 compute-0 sudo[268063]: pam_unix(sudo:session): session closed for user root
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ce72fa9d-6cfa-4bc1-9bbe-b500ad65a9aa, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 2 op/s
Oct 01 17:01:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:01:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:01:22 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "ce72fa9d-6cfa-4bc1-9bbe-b500ad65a9aa", "format": "json"}]: dispatch
Oct 01 17:01:22 compute-0 ceph-mon[74273]: pgmap v937: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 2 op/s
Oct 01 17:01:23 compute-0 podman[268088]: 2025-10-01 17:01:23.775019524 +0000 UTC m=+0.081364370 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:01:23 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:01:23.874 162304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2971fc2-5b75-459a-98a0-6e626d0d4d99, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 17:01:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 408 B/s rd, 22 KiB/s wr, 4 op/s
Oct 01 17:01:25 compute-0 ceph-mon[74273]: pgmap v938: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 408 B/s rd, 22 KiB/s wr, 4 op/s
Oct 01 17:01:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:01:25 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "93b513ed-8592-46d2-ae45-b578c4708055", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:01:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:93b513ed-8592-46d2-ae45-b578c4708055, vol_name:cephfs) < ""
Oct 01 17:01:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/93b513ed-8592-46d2-ae45-b578c4708055/.meta.tmp'
Oct 01 17:01:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/93b513ed-8592-46d2-ae45-b578c4708055/.meta.tmp' to config b'/volumes/_nogroup/93b513ed-8592-46d2-ae45-b578c4708055/.meta'
Oct 01 17:01:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:93b513ed-8592-46d2-ae45-b578c4708055, vol_name:cephfs) < ""
Oct 01 17:01:25 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "93b513ed-8592-46d2-ae45-b578c4708055", "format": "json"}]: dispatch
Oct 01 17:01:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:93b513ed-8592-46d2-ae45-b578c4708055, vol_name:cephfs) < ""
Oct 01 17:01:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:93b513ed-8592-46d2-ae45-b578c4708055, vol_name:cephfs) < ""
Oct 01 17:01:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:01:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:01:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 381 B/s rd, 21 KiB/s wr, 4 op/s
Oct 01 17:01:26 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "ce72fa9d-6cfa-4bc1-9bbe-b500ad65a9aa_8d43c405-9c5d-44c9-8d91-7d233d10bd92", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ce72fa9d-6cfa-4bc1-9bbe-b500ad65a9aa_8d43c405-9c5d-44c9-8d91-7d233d10bd92, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp'
Oct 01 17:01:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp' to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta'
Oct 01 17:01:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ce72fa9d-6cfa-4bc1-9bbe-b500ad65a9aa_8d43c405-9c5d-44c9-8d91-7d233d10bd92, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:26 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "ce72fa9d-6cfa-4bc1-9bbe-b500ad65a9aa", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ce72fa9d-6cfa-4bc1-9bbe-b500ad65a9aa, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp'
Oct 01 17:01:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp' to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta'
Oct 01 17:01:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:01:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ce72fa9d-6cfa-4bc1-9bbe-b500ad65a9aa, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:26 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c5ae6d5e-ebae-47c0-867d-5722bbee2320", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:01:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c5ae6d5e-ebae-47c0-867d-5722bbee2320, vol_name:cephfs) < ""
Oct 01 17:01:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c5ae6d5e-ebae-47c0-867d-5722bbee2320/.meta.tmp'
Oct 01 17:01:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c5ae6d5e-ebae-47c0-867d-5722bbee2320/.meta.tmp' to config b'/volumes/_nogroup/c5ae6d5e-ebae-47c0-867d-5722bbee2320/.meta'
Oct 01 17:01:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c5ae6d5e-ebae-47c0-867d-5722bbee2320, vol_name:cephfs) < ""
Oct 01 17:01:27 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c5ae6d5e-ebae-47c0-867d-5722bbee2320", "format": "json"}]: dispatch
Oct 01 17:01:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c5ae6d5e-ebae-47c0-867d-5722bbee2320, vol_name:cephfs) < ""
Oct 01 17:01:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c5ae6d5e-ebae-47c0-867d-5722bbee2320, vol_name:cephfs) < ""
Oct 01 17:01:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:01:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:01:27 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "93b513ed-8592-46d2-ae45-b578c4708055", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:01:27 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "93b513ed-8592-46d2-ae45-b578c4708055", "format": "json"}]: dispatch
Oct 01 17:01:27 compute-0 ceph-mon[74273]: pgmap v939: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 381 B/s rd, 21 KiB/s wr, 4 op/s
Oct 01 17:01:27 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "ce72fa9d-6cfa-4bc1-9bbe-b500ad65a9aa_8d43c405-9c5d-44c9-8d91-7d233d10bd92", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:27 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "ce72fa9d-6cfa-4bc1-9bbe-b500ad65a9aa", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:27 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c5ae6d5e-ebae-47c0-867d-5722bbee2320", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:01:27 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c5ae6d5e-ebae-47c0-867d-5722bbee2320", "format": "json"}]: dispatch
Oct 01 17:01:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:01:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 340 B/s rd, 19 KiB/s wr, 3 op/s
Oct 01 17:01:28 compute-0 ceph-mon[74273]: pgmap v940: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 340 B/s rd, 19 KiB/s wr, 3 op/s
Oct 01 17:01:29 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "93b513ed-8592-46d2-ae45-b578c4708055", "snap_name": "0836034c-448e-4a12-a3fa-b569fb708078", "format": "json"}]: dispatch
Oct 01 17:01:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0836034c-448e-4a12-a3fa-b569fb708078, sub_name:93b513ed-8592-46d2-ae45-b578c4708055, vol_name:cephfs) < ""
Oct 01 17:01:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0836034c-448e-4a12-a3fa-b569fb708078, sub_name:93b513ed-8592-46d2-ae45-b578c4708055, vol_name:cephfs) < ""
Oct 01 17:01:29 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "82d8c993-67bf-46d4-81da-17b2c2b9d616_14f6007c-8bd2-4e39-815d-0aa48f9a3796", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:82d8c993-67bf-46d4-81da-17b2c2b9d616_14f6007c-8bd2-4e39-815d-0aa48f9a3796, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:29 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "93b513ed-8592-46d2-ae45-b578c4708055", "snap_name": "0836034c-448e-4a12-a3fa-b569fb708078", "format": "json"}]: dispatch
Oct 01 17:01:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 340 B/s rd, 33 KiB/s wr, 5 op/s
Oct 01 17:01:30 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp'
Oct 01 17:01:30 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp' to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta'
Oct 01 17:01:30 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:82d8c993-67bf-46d4-81da-17b2c2b9d616_14f6007c-8bd2-4e39-815d-0aa48f9a3796, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:30 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "82d8c993-67bf-46d4-81da-17b2c2b9d616", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:30 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:82d8c993-67bf-46d4-81da-17b2c2b9d616, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:01:30 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp'
Oct 01 17:01:30 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp' to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta'
Oct 01 17:01:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct 01 17:01:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct 01 17:01:31 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct 01 17:01:31 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "82d8c993-67bf-46d4-81da-17b2c2b9d616_14f6007c-8bd2-4e39-815d-0aa48f9a3796", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:31 compute-0 ceph-mon[74273]: pgmap v941: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 340 B/s rd, 33 KiB/s wr, 5 op/s
Oct 01 17:01:31 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "82d8c993-67bf-46d4-81da-17b2c2b9d616", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:31 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 24 KiB/s wr, 4 op/s
Oct 01 17:01:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:82d8c993-67bf-46d4-81da-17b2c2b9d616, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:32 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c5ae6d5e-ebae-47c0-867d-5722bbee2320", "snap_name": "95f0d76c-a17b-4697-b50f-2bff25aa56cd", "format": "json"}]: dispatch
Oct 01 17:01:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:95f0d76c-a17b-4697-b50f-2bff25aa56cd, sub_name:c5ae6d5e-ebae-47c0-867d-5722bbee2320, vol_name:cephfs) < ""
Oct 01 17:01:33 compute-0 ceph-mon[74273]: osdmap e132: 3 total, 3 up, 3 in
Oct 01 17:01:33 compute-0 ceph-mon[74273]: pgmap v943: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 24 KiB/s wr, 4 op/s
Oct 01 17:01:33 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 43 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 32 KiB/s wr, 4 op/s
Oct 01 17:01:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:95f0d76c-a17b-4697-b50f-2bff25aa56cd, sub_name:c5ae6d5e-ebae-47c0-867d-5722bbee2320, vol_name:cephfs) < ""
Oct 01 17:01:34 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "93b513ed-8592-46d2-ae45-b578c4708055", "snap_name": "0836034c-448e-4a12-a3fa-b569fb708078", "target_sub_name": "a5f570e8-f76e-482d-b0fa-146c6f6d8dd3", "format": "json"}]: dispatch
Oct 01 17:01:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:0836034c-448e-4a12-a3fa-b569fb708078, sub_name:93b513ed-8592-46d2-ae45-b578c4708055, target_sub_name:a5f570e8-f76e-482d-b0fa-146c6f6d8dd3, vol_name:cephfs) < ""
Oct 01 17:01:34 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c5ae6d5e-ebae-47c0-867d-5722bbee2320", "snap_name": "95f0d76c-a17b-4697-b50f-2bff25aa56cd", "format": "json"}]: dispatch
Oct 01 17:01:34 compute-0 ceph-mon[74273]: pgmap v944: 305 pgs: 305 active+clean; 43 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 32 KiB/s wr, 4 op/s
Oct 01 17:01:34 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "93b513ed-8592-46d2-ae45-b578c4708055", "snap_name": "0836034c-448e-4a12-a3fa-b569fb708078", "target_sub_name": "a5f570e8-f76e-482d-b0fa-146c6f6d8dd3", "format": "json"}]: dispatch
Oct 01 17:01:34 compute-0 podman[268108]: 2025-10-01 17:01:34.828302668 +0000 UTC m=+0.139001093 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 01 17:01:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:01:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct 01 17:01:35 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 43 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 32 KiB/s wr, 4 op/s
Oct 01 17:01:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct 01 17:01:36 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/a5f570e8-f76e-482d-b0fa-146c6f6d8dd3/.meta.tmp'
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a5f570e8-f76e-482d-b0fa-146c6f6d8dd3/.meta.tmp' to config b'/volumes/_nogroup/a5f570e8-f76e-482d-b0fa-146c6f6d8dd3/.meta'
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.clone_index] tracking-id a786ac5b-d6ed-4306-ab1d-a196cdae0708 for path b'/volumes/_nogroup/a5f570e8-f76e-482d-b0fa-146c6f6d8dd3'
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/93b513ed-8592-46d2-ae45-b578c4708055/.meta.tmp'
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/93b513ed-8592-46d2-ae45-b578c4708055/.meta.tmp' to config b'/volumes/_nogroup/93b513ed-8592-46d2-ae45-b578c4708055/.meta'
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:0836034c-448e-4a12-a3fa-b569fb708078, sub_name:93b513ed-8592-46d2-ae45-b578c4708055, target_sub_name:a5f570e8-f76e-482d-b0fa-146c6f6d8dd3, vol_name:cephfs) < ""
Oct 01 17:01:36 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:01:36.282+0000 7f814003c640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:01:36 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:01:36.282+0000 7f814003c640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:01:36 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:01:36.282+0000 7f814003c640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:01:36 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:01:36.282+0000 7f814003c640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:01:36 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:01:36.282+0000 7f814003c640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a5f570e8-f76e-482d-b0fa-146c6f6d8dd3", "format": "json"}]: dispatch
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a5f570e8-f76e-482d-b0fa-146c6f6d8dd3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a5f570e8-f76e-482d-b0fa-146c6f6d8dd3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "99077526-b8d6-449c-8b0b-3d920742c77d_372b8162-783c-4a60-a7f6-da25b169c7da", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:99077526-b8d6-449c-8b0b-3d920742c77d_372b8162-783c-4a60-a7f6-da25b169c7da, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp'
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp' to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta'
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:99077526-b8d6-449c-8b0b-3d920742c77d_372b8162-783c-4a60-a7f6-da25b169c7da, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/a5f570e8-f76e-482d-b0fa-146c6f6d8dd3
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, a5f570e8-f76e-482d-b0fa-146c6f6d8dd3)
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "99077526-b8d6-449c-8b0b-3d920742c77d", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:99077526-b8d6-449c-8b0b-3d920742c77d, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:36 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:01:36.778+0000 7f813f83b640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:01:36 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:01:36.778+0000 7f813f83b640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:01:36 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:01:36.778+0000 7f813f83b640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:01:36 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:01:36.778+0000 7f813f83b640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:01:36 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:01:36.778+0000 7f813f83b640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:01:36 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:01:37 compute-0 ceph-mon[74273]: pgmap v945: 305 pgs: 305 active+clean; 43 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 32 KiB/s wr, 4 op/s
Oct 01 17:01:37 compute-0 ceph-mon[74273]: osdmap e133: 3 total, 3 up, 3 in
Oct 01 17:01:37 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, a5f570e8-f76e-482d-b0fa-146c6f6d8dd3) -- by 0 seconds
Oct 01 17:01:37 compute-0 podman[268158]: 2025-10-01 17:01:37.736744077 +0000 UTC m=+0.051493105 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 17:01:37 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/a5f570e8-f76e-482d-b0fa-146c6f6d8dd3/.meta.tmp'
Oct 01 17:01:37 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a5f570e8-f76e-482d-b0fa-146c6f6d8dd3/.meta.tmp' to config b'/volumes/_nogroup/a5f570e8-f76e-482d-b0fa-146c6f6d8dd3/.meta'
Oct 01 17:01:37 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 43 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 18 KiB/s wr, 2 op/s
Oct 01 17:01:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp'
Oct 01 17:01:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp' to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta'
Oct 01 17:01:38 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a5f570e8-f76e-482d-b0fa-146c6f6d8dd3", "format": "json"}]: dispatch
Oct 01 17:01:38 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "99077526-b8d6-449c-8b0b-3d920742c77d_372b8162-783c-4a60-a7f6-da25b169c7da", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:38 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "99077526-b8d6-449c-8b0b-3d920742c77d", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:38 compute-0 ceph-mon[74273]: pgmap v947: 305 pgs: 305 active+clean; 43 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 18 KiB/s wr, 2 op/s
Oct 01 17:01:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/93b513ed-8592-46d2-ae45-b578c4708055/.snap/0836034c-448e-4a12-a3fa-b569fb708078/49bc7a7c-014e-4e9c-8b9f-a9c00a0f0993' to b'/volumes/_nogroup/a5f570e8-f76e-482d-b0fa-146c6f6d8dd3/94ce63e3-c262-4e88-805f-e02ee4fbcabe'
Oct 01 17:01:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:99077526-b8d6-449c-8b0b-3d920742c77d, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:38 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c5ae6d5e-ebae-47c0-867d-5722bbee2320", "snap_name": "95f0d76c-a17b-4697-b50f-2bff25aa56cd_4f34b0d7-ac8c-42b9-84bc-4ce41ca1bd2a", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:95f0d76c-a17b-4697-b50f-2bff25aa56cd_4f34b0d7-ac8c-42b9-84bc-4ce41ca1bd2a, sub_name:c5ae6d5e-ebae-47c0-867d-5722bbee2320, vol_name:cephfs) < ""
Oct 01 17:01:38 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.pmbdpj(active, since 27m)
Oct 01 17:01:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c5ae6d5e-ebae-47c0-867d-5722bbee2320/.meta.tmp'
Oct 01 17:01:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c5ae6d5e-ebae-47c0-867d-5722bbee2320/.meta.tmp' to config b'/volumes/_nogroup/c5ae6d5e-ebae-47c0-867d-5722bbee2320/.meta'
Oct 01 17:01:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:95f0d76c-a17b-4697-b50f-2bff25aa56cd_4f34b0d7-ac8c-42b9-84bc-4ce41ca1bd2a, sub_name:c5ae6d5e-ebae-47c0-867d-5722bbee2320, vol_name:cephfs) < ""
Oct 01 17:01:38 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c5ae6d5e-ebae-47c0-867d-5722bbee2320", "snap_name": "95f0d76c-a17b-4697-b50f-2bff25aa56cd", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:95f0d76c-a17b-4697-b50f-2bff25aa56cd, sub_name:c5ae6d5e-ebae-47c0-867d-5722bbee2320, vol_name:cephfs) < ""
Oct 01 17:01:39 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c5ae6d5e-ebae-47c0-867d-5722bbee2320/.meta.tmp'
Oct 01 17:01:39 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c5ae6d5e-ebae-47c0-867d-5722bbee2320/.meta.tmp' to config b'/volumes/_nogroup/c5ae6d5e-ebae-47c0-867d-5722bbee2320/.meta'
Oct 01 17:01:39 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:95f0d76c-a17b-4697-b50f-2bff25aa56cd, sub_name:c5ae6d5e-ebae-47c0-867d-5722bbee2320, vol_name:cephfs) < ""
Oct 01 17:01:39 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "0efe5646-e561-4cdf-9c56-ad36284bde88_b975d937-f96d-40aa-9e6e-dd7325454ef0", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:39 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0efe5646-e561-4cdf-9c56-ad36284bde88_b975d937-f96d-40aa-9e6e-dd7325454ef0, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:39 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/a5f570e8-f76e-482d-b0fa-146c6f6d8dd3/.meta.tmp'
Oct 01 17:01:39 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a5f570e8-f76e-482d-b0fa-146c6f6d8dd3/.meta.tmp' to config b'/volumes/_nogroup/a5f570e8-f76e-482d-b0fa-146c6f6d8dd3/.meta'
Oct 01 17:01:39 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 43 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 609 B/s rd, 51 KiB/s wr, 7 op/s
Oct 01 17:01:40 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c5ae6d5e-ebae-47c0-867d-5722bbee2320", "snap_name": "95f0d76c-a17b-4697-b50f-2bff25aa56cd_4f34b0d7-ac8c-42b9-84bc-4ce41ca1bd2a", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:40 compute-0 ceph-mon[74273]: mgrmap e12: compute-0.pmbdpj(active, since 27m)
Oct 01 17:01:40 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c5ae6d5e-ebae-47c0-867d-5722bbee2320", "snap_name": "95f0d76c-a17b-4697-b50f-2bff25aa56cd", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp'
Oct 01 17:01:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp' to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta'
Oct 01 17:01:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0efe5646-e561-4cdf-9c56-ad36284bde88_b975d937-f96d-40aa-9e6e-dd7325454ef0, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:40 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "0efe5646-e561-4cdf-9c56-ad36284bde88", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0efe5646-e561-4cdf-9c56-ad36284bde88, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.clone_index] untracking a786ac5b-d6ed-4306-ab1d-a196cdae0708
Oct 01 17:01:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 17:01:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct 01 17:01:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/93b513ed-8592-46d2-ae45-b578c4708055/.meta.tmp'
Oct 01 17:01:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/93b513ed-8592-46d2-ae45-b578c4708055/.meta.tmp' to config b'/volumes/_nogroup/93b513ed-8592-46d2-ae45-b578c4708055/.meta'
Oct 01 17:01:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct 01 17:01:40 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct 01 17:01:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/a5f570e8-f76e-482d-b0fa-146c6f6d8dd3/.meta.tmp'
Oct 01 17:01:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a5f570e8-f76e-482d-b0fa-146c6f6d8dd3/.meta.tmp' to config b'/volumes/_nogroup/a5f570e8-f76e-482d-b0fa-146c6f6d8dd3/.meta'
Oct 01 17:01:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, a5f570e8-f76e-482d-b0fa-146c6f6d8dd3)
Oct 01 17:01:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:01:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:01:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:01:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:01:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:01:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:01:41 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "0efe5646-e561-4cdf-9c56-ad36284bde88_b975d937-f96d-40aa-9e6e-dd7325454ef0", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:41 compute-0 ceph-mon[74273]: pgmap v948: 305 pgs: 305 active+clean; 43 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 609 B/s rd, 51 KiB/s wr, 7 op/s
Oct 01 17:01:41 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "0efe5646-e561-4cdf-9c56-ad36284bde88", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:41 compute-0 ceph-mon[74273]: osdmap e134: 3 total, 3 up, 3 in
Oct 01 17:01:41 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 43 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 35 KiB/s wr, 5 op/s
Oct 01 17:01:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp'
Oct 01 17:01:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp' to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta'
Oct 01 17:01:42 compute-0 ceph-mon[74273]: pgmap v950: 305 pgs: 305 active+clean; 43 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 35 KiB/s wr, 5 op/s
Oct 01 17:01:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0efe5646-e561-4cdf-9c56-ad36284bde88, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:43 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c5ae6d5e-ebae-47c0-867d-5722bbee2320", "format": "json"}]: dispatch
Oct 01 17:01:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c5ae6d5e-ebae-47c0-867d-5722bbee2320, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:01:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c5ae6d5e-ebae-47c0-867d-5722bbee2320, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:01:43 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:01:43.195+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c5ae6d5e-ebae-47c0-867d-5722bbee2320' of type subvolume
Oct 01 17:01:43 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c5ae6d5e-ebae-47c0-867d-5722bbee2320' of type subvolume
Oct 01 17:01:43 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c5ae6d5e-ebae-47c0-867d-5722bbee2320", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c5ae6d5e-ebae-47c0-867d-5722bbee2320, vol_name:cephfs) < ""
Oct 01 17:01:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c5ae6d5e-ebae-47c0-867d-5722bbee2320'' moved to trashcan
Oct 01 17:01:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:01:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c5ae6d5e-ebae-47c0-867d-5722bbee2320, vol_name:cephfs) < ""
Oct 01 17:01:43 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c5ae6d5e-ebae-47c0-867d-5722bbee2320", "format": "json"}]: dispatch
Oct 01 17:01:43 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c5ae6d5e-ebae-47c0-867d-5722bbee2320", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 17:01:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2725709038' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:01:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 17:01:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2725709038' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:01:43 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 72 KiB/s wr, 11 op/s
Oct 01 17:01:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2725709038' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:01:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2725709038' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:01:44 compute-0 ceph-mon[74273]: pgmap v951: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 72 KiB/s wr, 11 op/s
Oct 01 17:01:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:01:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct 01 17:01:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct 01 17:01:45 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct 01 17:01:45 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 72 KiB/s wr, 11 op/s
Oct 01 17:01:46 compute-0 ceph-mon[74273]: osdmap e135: 3 total, 3 up, 3 in
Oct 01 17:01:46 compute-0 ceph-mon[74273]: pgmap v953: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 72 KiB/s wr, 11 op/s
Oct 01 17:01:46 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "65fe10ee-973e-4685-94ee-6c8649be79ad_04def128-3433-4b04-912c-de5945031a2a", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:65fe10ee-973e-4685-94ee-6c8649be79ad_04def128-3433-4b04-912c-de5945031a2a, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp'
Oct 01 17:01:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp' to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta'
Oct 01 17:01:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:65fe10ee-973e-4685-94ee-6c8649be79ad_04def128-3433-4b04-912c-de5945031a2a, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:46 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "65fe10ee-973e-4685-94ee-6c8649be79ad", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:65fe10ee-973e-4685-94ee-6c8649be79ad, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp'
Oct 01 17:01:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta.tmp' to config b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb/.meta'
Oct 01 17:01:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:65fe10ee-973e-4685-94ee-6c8649be79ad, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:47 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "65fe10ee-973e-4685-94ee-6c8649be79ad_04def128-3433-4b04-912c-de5945031a2a", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:47 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "snap_name": "65fe10ee-973e-4685-94ee-6c8649be79ad", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:47 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 37 KiB/s wr, 6 op/s
Oct 01 17:01:48 compute-0 ceph-mon[74273]: pgmap v954: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 37 KiB/s wr, 6 op/s
Oct 01 17:01:49 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 59 KiB/s wr, 10 op/s
Oct 01 17:01:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:01:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct 01 17:01:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct 01 17:01:50 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct 01 17:01:50 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c469f818-07e0-4818-86b2-37da251687bb", "format": "json"}]: dispatch
Oct 01 17:01:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c469f818-07e0-4818-86b2-37da251687bb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:01:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c469f818-07e0-4818-86b2-37da251687bb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:01:50 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:01:50.677+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c469f818-07e0-4818-86b2-37da251687bb' of type subvolume
Oct 01 17:01:50 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c469f818-07e0-4818-86b2-37da251687bb' of type subvolume
Oct 01 17:01:50 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c469f818-07e0-4818-86b2-37da251687bb'' moved to trashcan
Oct 01 17:01:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:01:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c469f818-07e0-4818-86b2-37da251687bb, vol_name:cephfs) < ""
Oct 01 17:01:51 compute-0 ceph-mon[74273]: pgmap v955: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 59 KiB/s wr, 10 op/s
Oct 01 17:01:51 compute-0 ceph-mon[74273]: osdmap e136: 3 total, 3 up, 3 in
Oct 01 17:01:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct 01 17:01:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct 01 17:01:51 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct 01 17:01:51 compute-0 podman[268178]: 2025-10-01 17:01:51.738930978 +0000 UTC m=+0.058682743 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 17:01:51 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 926 B/s rd, 38 KiB/s wr, 6 op/s
Oct 01 17:01:52 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c469f818-07e0-4818-86b2-37da251687bb", "format": "json"}]: dispatch
Oct 01 17:01:52 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c469f818-07e0-4818-86b2-37da251687bb", "force": true, "format": "json"}]: dispatch
Oct 01 17:01:52 compute-0 ceph-mon[74273]: osdmap e137: 3 total, 3 up, 3 in
Oct 01 17:01:53 compute-0 ceph-mon[74273]: pgmap v958: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 926 B/s rd, 38 KiB/s wr, 6 op/s
Oct 01 17:01:53 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 43 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 44 KiB/s wr, 7 op/s
Oct 01 17:01:54 compute-0 podman[268198]: 2025-10-01 17:01:54.7439633 +0000 UTC m=+0.058738306 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 01 17:01:55 compute-0 ceph-mon[74273]: pgmap v959: 305 pgs: 305 active+clean; 43 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 44 KiB/s wr, 7 op/s
Oct 01 17:01:55 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9cd665eb-c288-4c00-89a5-5693af445149", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:01:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9cd665eb-c288-4c00-89a5-5693af445149, vol_name:cephfs) < ""
Oct 01 17:01:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9cd665eb-c288-4c00-89a5-5693af445149/.meta.tmp'
Oct 01 17:01:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9cd665eb-c288-4c00-89a5-5693af445149/.meta.tmp' to config b'/volumes/_nogroup/9cd665eb-c288-4c00-89a5-5693af445149/.meta'
Oct 01 17:01:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9cd665eb-c288-4c00-89a5-5693af445149, vol_name:cephfs) < ""
Oct 01 17:01:55 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9cd665eb-c288-4c00-89a5-5693af445149", "format": "json"}]: dispatch
Oct 01 17:01:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9cd665eb-c288-4c00-89a5-5693af445149, vol_name:cephfs) < ""
Oct 01 17:01:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9cd665eb-c288-4c00-89a5-5693af445149, vol_name:cephfs) < ""
Oct 01 17:01:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:01:55 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:01:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:01:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct 01 17:01:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct 01 17:01:55 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct 01 17:01:55 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a5f570e8-f76e-482d-b0fa-146c6f6d8dd3", "format": "json"}]: dispatch
Oct 01 17:01:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a5f570e8-f76e-482d-b0fa-146c6f6d8dd3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:01:55 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 43 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 3 op/s
Oct 01 17:01:56 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9cd665eb-c288-4c00-89a5-5693af445149", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:01:56 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9cd665eb-c288-4c00-89a5-5693af445149", "format": "json"}]: dispatch
Oct 01 17:01:56 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:01:56 compute-0 ceph-mon[74273]: osdmap e138: 3 total, 3 up, 3 in
Oct 01 17:01:57 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a5f570e8-f76e-482d-b0fa-146c6f6d8dd3", "format": "json"}]: dispatch
Oct 01 17:01:57 compute-0 ceph-mon[74273]: pgmap v961: 305 pgs: 305 active+clean; 43 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 3 op/s
Oct 01 17:01:57 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 43 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 536 B/s rd, 13 KiB/s wr, 2 op/s
Oct 01 17:01:59 compute-0 ceph-mon[74273]: pgmap v962: 305 pgs: 305 active+clean; 43 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 536 B/s rd, 13 KiB/s wr, 2 op/s
Oct 01 17:01:59 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 43 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 474 B/s rd, 27 KiB/s wr, 4 op/s
Oct 01 17:02:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a5f570e8-f76e-482d-b0fa-146c6f6d8dd3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:00 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a5f570e8-f76e-482d-b0fa-146c6f6d8dd3", "format": "json"}]: dispatch
Oct 01 17:02:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a5f570e8-f76e-482d-b0fa-146c6f6d8dd3, vol_name:cephfs) < ""
Oct 01 17:02:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a5f570e8-f76e-482d-b0fa-146c6f6d8dd3, vol_name:cephfs) < ""
Oct 01 17:02:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:02:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:02:00 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "29bcbd73-54c4-429e-b29a-68e1af0b3512", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:29bcbd73-54c4-429e-b29a-68e1af0b3512, vol_name:cephfs) < ""
Oct 01 17:02:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/29bcbd73-54c4-429e-b29a-68e1af0b3512/.meta.tmp'
Oct 01 17:02:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/29bcbd73-54c4-429e-b29a-68e1af0b3512/.meta.tmp' to config b'/volumes/_nogroup/29bcbd73-54c4-429e-b29a-68e1af0b3512/.meta'
Oct 01 17:02:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:29bcbd73-54c4-429e-b29a-68e1af0b3512, vol_name:cephfs) < ""
Oct 01 17:02:00 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "29bcbd73-54c4-429e-b29a-68e1af0b3512", "format": "json"}]: dispatch
Oct 01 17:02:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:29bcbd73-54c4-429e-b29a-68e1af0b3512, vol_name:cephfs) < ""
Oct 01 17:02:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:29bcbd73-54c4-429e-b29a-68e1af0b3512, vol_name:cephfs) < ""
Oct 01 17:02:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:02:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:00 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "52ee1ac4-a3b4-464a-84cf-883d69e24a62", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:52ee1ac4-a3b4-464a-84cf-883d69e24a62, vol_name:cephfs) < ""
Oct 01 17:02:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/52ee1ac4-a3b4-464a-84cf-883d69e24a62/.meta.tmp'
Oct 01 17:02:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/52ee1ac4-a3b4-464a-84cf-883d69e24a62/.meta.tmp' to config b'/volumes/_nogroup/52ee1ac4-a3b4-464a-84cf-883d69e24a62/.meta'
Oct 01 17:02:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:52ee1ac4-a3b4-464a-84cf-883d69e24a62, vol_name:cephfs) < ""
Oct 01 17:02:00 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "52ee1ac4-a3b4-464a-84cf-883d69e24a62", "format": "json"}]: dispatch
Oct 01 17:02:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:52ee1ac4-a3b4-464a-84cf-883d69e24a62, vol_name:cephfs) < ""
Oct 01 17:02:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:52ee1ac4-a3b4-464a-84cf-883d69e24a62, vol_name:cephfs) < ""
Oct 01 17:02:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:02:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:01 compute-0 ceph-mon[74273]: pgmap v963: 305 pgs: 305 active+clean; 43 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 474 B/s rd, 27 KiB/s wr, 4 op/s
Oct 01 17:02:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:01 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 43 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 23 KiB/s wr, 3 op/s
Oct 01 17:02:02 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a5f570e8-f76e-482d-b0fa-146c6f6d8dd3", "format": "json"}]: dispatch
Oct 01 17:02:02 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "29bcbd73-54c4-429e-b29a-68e1af0b3512", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:02 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "29bcbd73-54c4-429e-b29a-68e1af0b3512", "format": "json"}]: dispatch
Oct 01 17:02:02 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "52ee1ac4-a3b4-464a-84cf-883d69e24a62", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:02 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "52ee1ac4-a3b4-464a-84cf-883d69e24a62", "format": "json"}]: dispatch
Oct 01 17:02:02 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "52ee1ac4-a3b4-464a-84cf-883d69e24a62", "format": "json"}]: dispatch
Oct 01 17:02:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:52ee1ac4-a3b4-464a-84cf-883d69e24a62, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:52ee1ac4-a3b4-464a-84cf-883d69e24a62, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:02 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:02:02.613+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '52ee1ac4-a3b4-464a-84cf-883d69e24a62' of type subvolume
Oct 01 17:02:02 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '52ee1ac4-a3b4-464a-84cf-883d69e24a62' of type subvolume
Oct 01 17:02:02 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "52ee1ac4-a3b4-464a-84cf-883d69e24a62", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:52ee1ac4-a3b4-464a-84cf-883d69e24a62, vol_name:cephfs) < ""
Oct 01 17:02:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/52ee1ac4-a3b4-464a-84cf-883d69e24a62'' moved to trashcan
Oct 01 17:02:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:02:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:52ee1ac4-a3b4-464a-84cf-883d69e24a62, vol_name:cephfs) < ""
Oct 01 17:02:02 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cd48652d-071f-468a-8c02-77f78f7bd8e7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cd48652d-071f-468a-8c02-77f78f7bd8e7, vol_name:cephfs) < ""
Oct 01 17:02:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cd48652d-071f-468a-8c02-77f78f7bd8e7/.meta.tmp'
Oct 01 17:02:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cd48652d-071f-468a-8c02-77f78f7bd8e7/.meta.tmp' to config b'/volumes/_nogroup/cd48652d-071f-468a-8c02-77f78f7bd8e7/.meta'
Oct 01 17:02:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cd48652d-071f-468a-8c02-77f78f7bd8e7, vol_name:cephfs) < ""
Oct 01 17:02:02 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cd48652d-071f-468a-8c02-77f78f7bd8e7", "format": "json"}]: dispatch
Oct 01 17:02:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cd48652d-071f-468a-8c02-77f78f7bd8e7, vol_name:cephfs) < ""
Oct 01 17:02:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cd48652d-071f-468a-8c02-77f78f7bd8e7, vol_name:cephfs) < ""
Oct 01 17:02:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:02:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:03 compute-0 ceph-mon[74273]: pgmap v964: 305 pgs: 305 active+clean; 43 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 23 KiB/s wr, 3 op/s
Oct 01 17:02:03 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:03 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 39 KiB/s wr, 4 op/s
Oct 01 17:02:04 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "52ee1ac4-a3b4-464a-84cf-883d69e24a62", "format": "json"}]: dispatch
Oct 01 17:02:04 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "52ee1ac4-a3b4-464a-84cf-883d69e24a62", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:04 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cd48652d-071f-468a-8c02-77f78f7bd8e7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:04 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cd48652d-071f-468a-8c02-77f78f7bd8e7", "format": "json"}]: dispatch
Oct 01 17:02:05 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a5f570e8-f76e-482d-b0fa-146c6f6d8dd3", "format": "json"}]: dispatch
Oct 01 17:02:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a5f570e8-f76e-482d-b0fa-146c6f6d8dd3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a5f570e8-f76e-482d-b0fa-146c6f6d8dd3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:05 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a5f570e8-f76e-482d-b0fa-146c6f6d8dd3", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a5f570e8-f76e-482d-b0fa-146c6f6d8dd3, vol_name:cephfs) < ""
Oct 01 17:02:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:02:05 compute-0 ceph-mon[74273]: pgmap v965: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 39 KiB/s wr, 4 op/s
Oct 01 17:02:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a5f570e8-f76e-482d-b0fa-146c6f6d8dd3'' moved to trashcan
Oct 01 17:02:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:02:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a5f570e8-f76e-482d-b0fa-146c6f6d8dd3, vol_name:cephfs) < ""
Oct 01 17:02:05 compute-0 podman[268218]: 2025-10-01 17:02:05.753457878 +0000 UTC m=+0.077997338 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 01 17:02:05 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 96 B/s rd, 36 KiB/s wr, 4 op/s
Oct 01 17:02:06 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9cd665eb-c288-4c00-89a5-5693af445149", "format": "json"}]: dispatch
Oct 01 17:02:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9cd665eb-c288-4c00-89a5-5693af445149, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9cd665eb-c288-4c00-89a5-5693af445149, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:06 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:02:06.241+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9cd665eb-c288-4c00-89a5-5693af445149' of type subvolume
Oct 01 17:02:06 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9cd665eb-c288-4c00-89a5-5693af445149' of type subvolume
Oct 01 17:02:06 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9cd665eb-c288-4c00-89a5-5693af445149", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9cd665eb-c288-4c00-89a5-5693af445149, vol_name:cephfs) < ""
Oct 01 17:02:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9cd665eb-c288-4c00-89a5-5693af445149'' moved to trashcan
Oct 01 17:02:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:02:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9cd665eb-c288-4c00-89a5-5693af445149, vol_name:cephfs) < ""
Oct 01 17:02:06 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a5f570e8-f76e-482d-b0fa-146c6f6d8dd3", "format": "json"}]: dispatch
Oct 01 17:02:06 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a5f570e8-f76e-482d-b0fa-146c6f6d8dd3", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:06 compute-0 ceph-mon[74273]: pgmap v966: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 96 B/s rd, 36 KiB/s wr, 4 op/s
Oct 01 17:02:07 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cd48652d-071f-468a-8c02-77f78f7bd8e7", "format": "json"}]: dispatch
Oct 01 17:02:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cd48652d-071f-468a-8c02-77f78f7bd8e7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cd48652d-071f-468a-8c02-77f78f7bd8e7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:07 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:02:07.210+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cd48652d-071f-468a-8c02-77f78f7bd8e7' of type subvolume
Oct 01 17:02:07 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cd48652d-071f-468a-8c02-77f78f7bd8e7' of type subvolume
Oct 01 17:02:07 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cd48652d-071f-468a-8c02-77f78f7bd8e7", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cd48652d-071f-468a-8c02-77f78f7bd8e7, vol_name:cephfs) < ""
Oct 01 17:02:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cd48652d-071f-468a-8c02-77f78f7bd8e7'' moved to trashcan
Oct 01 17:02:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:02:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cd48652d-071f-468a-8c02-77f78f7bd8e7, vol_name:cephfs) < ""
Oct 01 17:02:07 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9cd665eb-c288-4c00-89a5-5693af445149", "format": "json"}]: dispatch
Oct 01 17:02:07 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9cd665eb-c288-4c00-89a5-5693af445149", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:07 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cd48652d-071f-468a-8c02-77f78f7bd8e7", "format": "json"}]: dispatch
Oct 01 17:02:07 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 32 KiB/s wr, 3 op/s
Oct 01 17:02:08 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cd48652d-071f-468a-8c02-77f78f7bd8e7", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:08 compute-0 ceph-mon[74273]: pgmap v967: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 32 KiB/s wr, 3 op/s
Oct 01 17:02:08 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "93b513ed-8592-46d2-ae45-b578c4708055", "snap_name": "0836034c-448e-4a12-a3fa-b569fb708078_a1863850-c70c-4354-b0f9-49d9f8e0d5cf", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0836034c-448e-4a12-a3fa-b569fb708078_a1863850-c70c-4354-b0f9-49d9f8e0d5cf, sub_name:93b513ed-8592-46d2-ae45-b578c4708055, vol_name:cephfs) < ""
Oct 01 17:02:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/93b513ed-8592-46d2-ae45-b578c4708055/.meta.tmp'
Oct 01 17:02:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/93b513ed-8592-46d2-ae45-b578c4708055/.meta.tmp' to config b'/volumes/_nogroup/93b513ed-8592-46d2-ae45-b578c4708055/.meta'
Oct 01 17:02:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0836034c-448e-4a12-a3fa-b569fb708078_a1863850-c70c-4354-b0f9-49d9f8e0d5cf, sub_name:93b513ed-8592-46d2-ae45-b578c4708055, vol_name:cephfs) < ""
Oct 01 17:02:08 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "93b513ed-8592-46d2-ae45-b578c4708055", "snap_name": "0836034c-448e-4a12-a3fa-b569fb708078", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0836034c-448e-4a12-a3fa-b569fb708078, sub_name:93b513ed-8592-46d2-ae45-b578c4708055, vol_name:cephfs) < ""
Oct 01 17:02:08 compute-0 podman[268244]: 2025-10-01 17:02:08.770474421 +0000 UTC m=+0.089388052 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 01 17:02:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/93b513ed-8592-46d2-ae45-b578c4708055/.meta.tmp'
Oct 01 17:02:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/93b513ed-8592-46d2-ae45-b578c4708055/.meta.tmp' to config b'/volumes/_nogroup/93b513ed-8592-46d2-ae45-b578c4708055/.meta'
Oct 01 17:02:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0836034c-448e-4a12-a3fa-b569fb708078, sub_name:93b513ed-8592-46d2-ae45-b578c4708055, vol_name:cephfs) < ""
Oct 01 17:02:09 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "93b513ed-8592-46d2-ae45-b578c4708055", "snap_name": "0836034c-448e-4a12-a3fa-b569fb708078_a1863850-c70c-4354-b0f9-49d9f8e0d5cf", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:09 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "93b513ed-8592-46d2-ae45-b578c4708055", "snap_name": "0836034c-448e-4a12-a3fa-b569fb708078", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:09 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3096acfb-193c-4e04-82fb-a593edc05063", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:09 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3096acfb-193c-4e04-82fb-a593edc05063, vol_name:cephfs) < ""
Oct 01 17:02:09 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 54 KiB/s wr, 5 op/s
Oct 01 17:02:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3096acfb-193c-4e04-82fb-a593edc05063/.meta.tmp'
Oct 01 17:02:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3096acfb-193c-4e04-82fb-a593edc05063/.meta.tmp' to config b'/volumes/_nogroup/3096acfb-193c-4e04-82fb-a593edc05063/.meta'
Oct 01 17:02:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3096acfb-193c-4e04-82fb-a593edc05063, vol_name:cephfs) < ""
Oct 01 17:02:10 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3096acfb-193c-4e04-82fb-a593edc05063", "format": "json"}]: dispatch
Oct 01 17:02:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3096acfb-193c-4e04-82fb-a593edc05063, vol_name:cephfs) < ""
Oct 01 17:02:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3096acfb-193c-4e04-82fb-a593edc05063, vol_name:cephfs) < ""
Oct 01 17:02:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:02:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:02:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct 01 17:02:10 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3096acfb-193c-4e04-82fb-a593edc05063", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:10 compute-0 ceph-mon[74273]: pgmap v968: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 54 KiB/s wr, 5 op/s
Oct 01 17:02:10 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3096acfb-193c-4e04-82fb-a593edc05063", "format": "json"}]: dispatch
Oct 01 17:02:10 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct 01 17:02:10 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct 01 17:02:10 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b, vol_name:cephfs) < ""
Oct 01 17:02:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b/.meta.tmp'
Oct 01 17:02:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b/.meta.tmp' to config b'/volumes/_nogroup/127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b/.meta'
Oct 01 17:02:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b, vol_name:cephfs) < ""
Oct 01 17:02:10 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b", "format": "json"}]: dispatch
Oct 01 17:02:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b, vol_name:cephfs) < ""
Oct 01 17:02:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b, vol_name:cephfs) < ""
Oct 01 17:02:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:02:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_17:02:11
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'default.rgw.log', 'volumes', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'vms', '.mgr', 'cephfs.cephfs.data']
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:02:11 compute-0 ceph-mon[74273]: osdmap e139: 3 total, 3 up, 3 in
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:02:11 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:11 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b", "format": "json"}]: dispatch
Oct 01 17:02:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:02:11 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 51 KiB/s wr, 5 op/s
Oct 01 17:02:12 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "aef1d813-6fa0-431e-85ff-6739a4057903", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aef1d813-6fa0-431e-85ff-6739a4057903, vol_name:cephfs) < ""
Oct 01 17:02:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aef1d813-6fa0-431e-85ff-6739a4057903/.meta.tmp'
Oct 01 17:02:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aef1d813-6fa0-431e-85ff-6739a4057903/.meta.tmp' to config b'/volumes/_nogroup/aef1d813-6fa0-431e-85ff-6739a4057903/.meta'
Oct 01 17:02:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aef1d813-6fa0-431e-85ff-6739a4057903, vol_name:cephfs) < ""
Oct 01 17:02:12 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "aef1d813-6fa0-431e-85ff-6739a4057903", "format": "json"}]: dispatch
Oct 01 17:02:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aef1d813-6fa0-431e-85ff-6739a4057903, vol_name:cephfs) < ""
Oct 01 17:02:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aef1d813-6fa0-431e-85ff-6739a4057903, vol_name:cephfs) < ""
Oct 01 17:02:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:02:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:12 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "93b513ed-8592-46d2-ae45-b578c4708055", "format": "json"}]: dispatch
Oct 01 17:02:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:93b513ed-8592-46d2-ae45-b578c4708055, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:93b513ed-8592-46d2-ae45-b578c4708055, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:12 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:02:12.503+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '93b513ed-8592-46d2-ae45-b578c4708055' of type subvolume
Oct 01 17:02:12 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '93b513ed-8592-46d2-ae45-b578c4708055' of type subvolume
Oct 01 17:02:12 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "93b513ed-8592-46d2-ae45-b578c4708055", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:93b513ed-8592-46d2-ae45-b578c4708055, vol_name:cephfs) < ""
Oct 01 17:02:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/93b513ed-8592-46d2-ae45-b578c4708055'' moved to trashcan
Oct 01 17:02:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:02:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:93b513ed-8592-46d2-ae45-b578c4708055, vol_name:cephfs) < ""
Oct 01 17:02:12 compute-0 ceph-mon[74273]: pgmap v970: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 51 KiB/s wr, 5 op/s
Oct 01 17:02:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:13 compute-0 nova_compute[259504]: 2025-10-01 17:02:13.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:02:13 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "aef1d813-6fa0-431e-85ff-6739a4057903", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:13 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "aef1d813-6fa0-431e-85ff-6739a4057903", "format": "json"}]: dispatch
Oct 01 17:02:13 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "93b513ed-8592-46d2-ae45-b578c4708055", "format": "json"}]: dispatch
Oct 01 17:02:13 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "93b513ed-8592-46d2-ae45-b578c4708055", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:13 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 64 KiB/s wr, 7 op/s
Oct 01 17:02:14 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3096acfb-193c-4e04-82fb-a593edc05063", "format": "json"}]: dispatch
Oct 01 17:02:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3096acfb-193c-4e04-82fb-a593edc05063, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3096acfb-193c-4e04-82fb-a593edc05063, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:14 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:02:14.236+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3096acfb-193c-4e04-82fb-a593edc05063' of type subvolume
Oct 01 17:02:14 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3096acfb-193c-4e04-82fb-a593edc05063' of type subvolume
Oct 01 17:02:14 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3096acfb-193c-4e04-82fb-a593edc05063", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3096acfb-193c-4e04-82fb-a593edc05063, vol_name:cephfs) < ""
Oct 01 17:02:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3096acfb-193c-4e04-82fb-a593edc05063'' moved to trashcan
Oct 01 17:02:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:02:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3096acfb-193c-4e04-82fb-a593edc05063, vol_name:cephfs) < ""
Oct 01 17:02:14 compute-0 ceph-mon[74273]: pgmap v971: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 64 KiB/s wr, 7 op/s
Oct 01 17:02:14 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3096acfb-193c-4e04-82fb-a593edc05063", "format": "json"}]: dispatch
Oct 01 17:02:14 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3096acfb-193c-4e04-82fb-a593edc05063", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:02:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct 01 17:02:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct 01 17:02:15 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b", "format": "json"}]: dispatch
Oct 01 17:02:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:15 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:02:15.543+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b' of type subvolume
Oct 01 17:02:15 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b' of type subvolume
Oct 01 17:02:15 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct 01 17:02:15 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b, vol_name:cephfs) < ""
Oct 01 17:02:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b'' moved to trashcan
Oct 01 17:02:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:02:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b, vol_name:cephfs) < ""
Oct 01 17:02:15 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "aef1d813-6fa0-431e-85ff-6739a4057903", "snap_name": "586d43f1-a8b2-44e0-b7f4-42b2e4ab8edf", "format": "json"}]: dispatch
Oct 01 17:02:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:586d43f1-a8b2-44e0-b7f4-42b2e4ab8edf, sub_name:aef1d813-6fa0-431e-85ff-6739a4057903, vol_name:cephfs) < ""
Oct 01 17:02:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:586d43f1-a8b2-44e0-b7f4-42b2e4ab8edf, sub_name:aef1d813-6fa0-431e-85ff-6739a4057903, vol_name:cephfs) < ""
Oct 01 17:02:15 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 80 KiB/s wr, 9 op/s
Oct 01 17:02:16 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b", "format": "json"}]: dispatch
Oct 01 17:02:16 compute-0 ceph-mon[74273]: osdmap e140: 3 total, 3 up, 3 in
Oct 01 17:02:16 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "127b3cca-8ccf-4e68-a7e5-ce47f71b0e6b", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:16 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "aef1d813-6fa0-431e-85ff-6739a4057903", "snap_name": "586d43f1-a8b2-44e0-b7f4-42b2e4ab8edf", "format": "json"}]: dispatch
Oct 01 17:02:16 compute-0 ceph-mon[74273]: pgmap v973: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 80 KiB/s wr, 9 op/s
Oct 01 17:02:16 compute-0 nova_compute[259504]: 2025-10-01 17:02:16.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:02:16 compute-0 nova_compute[259504]: 2025-10-01 17:02:16.779 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:02:16 compute-0 nova_compute[259504]: 2025-10-01 17:02:16.780 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:02:16 compute-0 nova_compute[259504]: 2025-10-01 17:02:16.780 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:02:16 compute-0 nova_compute[259504]: 2025-10-01 17:02:16.781 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 17:02:16 compute-0 nova_compute[259504]: 2025-10-01 17:02:16.781 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:02:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:02:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1014104433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:02:17 compute-0 nova_compute[259504]: 2025-10-01 17:02:17.206 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:02:17 compute-0 nova_compute[259504]: 2025-10-01 17:02:17.392 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 17:02:17 compute-0 nova_compute[259504]: 2025-10-01 17:02:17.394 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5131MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 17:02:17 compute-0 nova_compute[259504]: 2025-10-01 17:02:17.394 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:02:17 compute-0 nova_compute[259504]: 2025-10-01 17:02:17.395 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:02:17 compute-0 nova_compute[259504]: 2025-10-01 17:02:17.455 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 17:02:17 compute-0 nova_compute[259504]: 2025-10-01 17:02:17.456 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 17:02:17 compute-0 nova_compute[259504]: 2025-10-01 17:02:17.474 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:02:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1014104433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:02:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:02:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/806596675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:02:17 compute-0 nova_compute[259504]: 2025-10-01 17:02:17.942 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:02:17 compute-0 nova_compute[259504]: 2025-10-01 17:02:17.952 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 17:02:17 compute-0 nova_compute[259504]: 2025-10-01 17:02:17.967 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 17:02:17 compute-0 nova_compute[259504]: 2025-10-01 17:02:17.970 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 17:02:17 compute-0 nova_compute[259504]: 2025-10-01 17:02:17.970 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:02:17 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 47 KiB/s wr, 6 op/s
Oct 01 17:02:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/806596675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:02:18 compute-0 ceph-mon[74273]: pgmap v974: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 47 KiB/s wr, 6 op/s
Oct 01 17:02:18 compute-0 nova_compute[259504]: 2025-10-01 17:02:18.966 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:02:18 compute-0 nova_compute[259504]: 2025-10-01 17:02:18.966 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:02:18 compute-0 nova_compute[259504]: 2025-10-01 17:02:18.985 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:02:18 compute-0 nova_compute[259504]: 2025-10-01 17:02:18.986 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 17:02:18 compute-0 nova_compute[259504]: 2025-10-01 17:02:18.986 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 17:02:19 compute-0 nova_compute[259504]: 2025-10-01 17:02:19.002 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 17:02:19 compute-0 nova_compute[259504]: 2025-10-01 17:02:19.003 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:02:19 compute-0 nova_compute[259504]: 2025-10-01 17:02:19.003 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:02:19 compute-0 nova_compute[259504]: 2025-10-01 17:02:19.004 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 17:02:19 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "46dae3a3-bd69-43bd-9813-cd1117c66c7c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:19 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:46dae3a3-bd69-43bd-9813-cd1117c66c7c, vol_name:cephfs) < ""
Oct 01 17:02:19 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/46dae3a3-bd69-43bd-9813-cd1117c66c7c/.meta.tmp'
Oct 01 17:02:19 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/46dae3a3-bd69-43bd-9813-cd1117c66c7c/.meta.tmp' to config b'/volumes/_nogroup/46dae3a3-bd69-43bd-9813-cd1117c66c7c/.meta'
Oct 01 17:02:19 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:46dae3a3-bd69-43bd-9813-cd1117c66c7c, vol_name:cephfs) < ""
Oct 01 17:02:19 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "46dae3a3-bd69-43bd-9813-cd1117c66c7c", "format": "json"}]: dispatch
Oct 01 17:02:19 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:46dae3a3-bd69-43bd-9813-cd1117c66c7c, vol_name:cephfs) < ""
Oct 01 17:02:19 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:46dae3a3-bd69-43bd-9813-cd1117c66c7c, vol_name:cephfs) < ""
Oct 01 17:02:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:02:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:19 compute-0 nova_compute[259504]: 2025-10-01 17:02:19.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:02:19 compute-0 nova_compute[259504]: 2025-10-01 17:02:19.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:02:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:02:19.970 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:02:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:02:19.970 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:02:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:02:19.971 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:02:19 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 44 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 65 KiB/s wr, 9 op/s
Oct 01 17:02:20 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "46dae3a3-bd69-43bd-9813-cd1117c66c7c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:02:20 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "aef1d813-6fa0-431e-85ff-6739a4057903", "snap_name": "586d43f1-a8b2-44e0-b7f4-42b2e4ab8edf_573d6d11-e29f-4de8-902a-26c52c887e40", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:586d43f1-a8b2-44e0-b7f4-42b2e4ab8edf_573d6d11-e29f-4de8-902a-26c52c887e40, sub_name:aef1d813-6fa0-431e-85ff-6739a4057903, vol_name:cephfs) < ""
Oct 01 17:02:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aef1d813-6fa0-431e-85ff-6739a4057903/.meta.tmp'
Oct 01 17:02:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aef1d813-6fa0-431e-85ff-6739a4057903/.meta.tmp' to config b'/volumes/_nogroup/aef1d813-6fa0-431e-85ff-6739a4057903/.meta'
Oct 01 17:02:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:586d43f1-a8b2-44e0-b7f4-42b2e4ab8edf_573d6d11-e29f-4de8-902a-26c52c887e40, sub_name:aef1d813-6fa0-431e-85ff-6739a4057903, vol_name:cephfs) < ""
Oct 01 17:02:20 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "aef1d813-6fa0-431e-85ff-6739a4057903", "snap_name": "586d43f1-a8b2-44e0-b7f4-42b2e4ab8edf", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:586d43f1-a8b2-44e0-b7f4-42b2e4ab8edf, sub_name:aef1d813-6fa0-431e-85ff-6739a4057903, vol_name:cephfs) < ""
Oct 01 17:02:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aef1d813-6fa0-431e-85ff-6739a4057903/.meta.tmp'
Oct 01 17:02:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aef1d813-6fa0-431e-85ff-6739a4057903/.meta.tmp' to config b'/volumes/_nogroup/aef1d813-6fa0-431e-85ff-6739a4057903/.meta'
Oct 01 17:02:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:586d43f1-a8b2-44e0-b7f4-42b2e4ab8edf, sub_name:aef1d813-6fa0-431e-85ff-6739a4057903, vol_name:cephfs) < ""
Oct 01 17:02:21 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "46dae3a3-bd69-43bd-9813-cd1117c66c7c", "format": "json"}]: dispatch
Oct 01 17:02:21 compute-0 ceph-mon[74273]: pgmap v975: 305 pgs: 305 active+clean; 44 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 65 KiB/s wr, 9 op/s
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.227156182848212e-05 of space, bias 4.0, pg target 0.06272587419417855 quantized to 16 (current 16)
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:02:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:02:21.504 162304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:71:db', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:60:3f:78:bd:29'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 17:02:21 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:02:21.505 162304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 17:02:21 compute-0 nova_compute[259504]: 2025-10-01 17:02:21.749 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:02:21 compute-0 sudo[268307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:02:21 compute-0 sudo[268307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:02:21 compute-0 sudo[268307]: pam_unix(sudo:session): session closed for user root
Oct 01 17:02:21 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 44 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 62 KiB/s wr, 8 op/s
Oct 01 17:02:22 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "aef1d813-6fa0-431e-85ff-6739a4057903", "snap_name": "586d43f1-a8b2-44e0-b7f4-42b2e4ab8edf_573d6d11-e29f-4de8-902a-26c52c887e40", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:22 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "aef1d813-6fa0-431e-85ff-6739a4057903", "snap_name": "586d43f1-a8b2-44e0-b7f4-42b2e4ab8edf", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:22 compute-0 sudo[268338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:02:22 compute-0 sudo[268338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:02:22 compute-0 sudo[268338]: pam_unix(sudo:session): session closed for user root
Oct 01 17:02:22 compute-0 podman[268331]: 2025-10-01 17:02:22.079921645 +0000 UTC m=+0.097341448 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 17:02:22 compute-0 sudo[268376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:02:22 compute-0 sudo[268376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:02:22 compute-0 sudo[268376]: pam_unix(sudo:session): session closed for user root
Oct 01 17:02:22 compute-0 sudo[268401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 17:02:22 compute-0 sudo[268401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:02:22 compute-0 sudo[268401]: pam_unix(sudo:session): session closed for user root
Oct 01 17:02:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:02:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:02:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 17:02:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:02:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 17:02:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:02:22 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 199cf503-ca27-4527-9be4-b792f9f2b088 does not exist
Oct 01 17:02:22 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 42cd696c-030b-4ae3-8447-dac4c4d7f782 does not exist
Oct 01 17:02:22 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 722ee3de-33b2-4dfd-ac18-89d8df3726de does not exist
Oct 01 17:02:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 17:02:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:02:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 17:02:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:02:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:02:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:02:22 compute-0 sudo[268455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:02:22 compute-0 sudo[268455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:02:22 compute-0 sudo[268455]: pam_unix(sudo:session): session closed for user root
Oct 01 17:02:22 compute-0 sudo[268480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:02:22 compute-0 sudo[268480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:02:22 compute-0 sudo[268480]: pam_unix(sudo:session): session closed for user root
Oct 01 17:02:22 compute-0 sudo[268505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:02:22 compute-0 sudo[268505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:02:22 compute-0 sudo[268505]: pam_unix(sudo:session): session closed for user root
Oct 01 17:02:22 compute-0 sudo[268530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 17:02:22 compute-0 sudo[268530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:02:23 compute-0 ceph-mon[74273]: pgmap v976: 305 pgs: 305 active+clean; 44 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 62 KiB/s wr, 8 op/s
Oct 01 17:02:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:02:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:02:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:02:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:02:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:02:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:02:23 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "46dae3a3-bd69-43bd-9813-cd1117c66c7c", "format": "json"}]: dispatch
Oct 01 17:02:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:46dae3a3-bd69-43bd-9813-cd1117c66c7c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:46dae3a3-bd69-43bd-9813-cd1117c66c7c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:23 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:02:23.061+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '46dae3a3-bd69-43bd-9813-cd1117c66c7c' of type subvolume
Oct 01 17:02:23 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '46dae3a3-bd69-43bd-9813-cd1117c66c7c' of type subvolume
Oct 01 17:02:23 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "46dae3a3-bd69-43bd-9813-cd1117c66c7c", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:46dae3a3-bd69-43bd-9813-cd1117c66c7c, vol_name:cephfs) < ""
Oct 01 17:02:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/46dae3a3-bd69-43bd-9813-cd1117c66c7c'' moved to trashcan
Oct 01 17:02:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:02:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:46dae3a3-bd69-43bd-9813-cd1117c66c7c, vol_name:cephfs) < ""
Oct 01 17:02:23 compute-0 podman[268597]: 2025-10-01 17:02:23.33077813 +0000 UTC m=+0.041585638 container create 990404b128997fa2e76e20d18b2433e0b89b8bbb82317ae94d709f426adf59f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:02:23 compute-0 systemd[1]: Started libpod-conmon-990404b128997fa2e76e20d18b2433e0b89b8bbb82317ae94d709f426adf59f5.scope.
Oct 01 17:02:23 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:02:23 compute-0 podman[268597]: 2025-10-01 17:02:23.407880841 +0000 UTC m=+0.118688389 container init 990404b128997fa2e76e20d18b2433e0b89b8bbb82317ae94d709f426adf59f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meitner, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:02:23 compute-0 podman[268597]: 2025-10-01 17:02:23.313928312 +0000 UTC m=+0.024735840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:02:23 compute-0 podman[268597]: 2025-10-01 17:02:23.417660556 +0000 UTC m=+0.128468064 container start 990404b128997fa2e76e20d18b2433e0b89b8bbb82317ae94d709f426adf59f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meitner, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:02:23 compute-0 podman[268597]: 2025-10-01 17:02:23.420492306 +0000 UTC m=+0.131299874 container attach 990404b128997fa2e76e20d18b2433e0b89b8bbb82317ae94d709f426adf59f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meitner, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 17:02:23 compute-0 nice_meitner[268613]: 167 167
Oct 01 17:02:23 compute-0 podman[268597]: 2025-10-01 17:02:23.423291969 +0000 UTC m=+0.134099507 container died 990404b128997fa2e76e20d18b2433e0b89b8bbb82317ae94d709f426adf59f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:02:23 compute-0 systemd[1]: libpod-990404b128997fa2e76e20d18b2433e0b89b8bbb82317ae94d709f426adf59f5.scope: Deactivated successfully.
Oct 01 17:02:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-442e63742da83d964066a704e84f355f02f10d98dfb7fcc6510850d46b2e884e-merged.mount: Deactivated successfully.
Oct 01 17:02:23 compute-0 podman[268597]: 2025-10-01 17:02:23.469571514 +0000 UTC m=+0.180379022 container remove 990404b128997fa2e76e20d18b2433e0b89b8bbb82317ae94d709f426adf59f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meitner, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:02:23 compute-0 systemd[1]: libpod-conmon-990404b128997fa2e76e20d18b2433e0b89b8bbb82317ae94d709f426adf59f5.scope: Deactivated successfully.
Oct 01 17:02:23 compute-0 podman[268637]: 2025-10-01 17:02:23.664433859 +0000 UTC m=+0.068704104 container create 2c9871f4e18b0c00bec76d9289fe3d8db5a4bd5f9ac5c948331261176f6922f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bartik, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 17:02:23 compute-0 systemd[1]: Started libpod-conmon-2c9871f4e18b0c00bec76d9289fe3d8db5a4bd5f9ac5c948331261176f6922f7.scope.
Oct 01 17:02:23 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:02:23 compute-0 podman[268637]: 2025-10-01 17:02:23.640291587 +0000 UTC m=+0.044561902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:02:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73fce6723cff69940a44cd9ea671090424ff8ac15a74513d880dbd348213fd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:02:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73fce6723cff69940a44cd9ea671090424ff8ac15a74513d880dbd348213fd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:02:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73fce6723cff69940a44cd9ea671090424ff8ac15a74513d880dbd348213fd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:02:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73fce6723cff69940a44cd9ea671090424ff8ac15a74513d880dbd348213fd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:02:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73fce6723cff69940a44cd9ea671090424ff8ac15a74513d880dbd348213fd2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 17:02:23 compute-0 podman[268637]: 2025-10-01 17:02:23.747102148 +0000 UTC m=+0.151372393 container init 2c9871f4e18b0c00bec76d9289fe3d8db5a4bd5f9ac5c948331261176f6922f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bartik, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:02:23 compute-0 podman[268637]: 2025-10-01 17:02:23.759215679 +0000 UTC m=+0.163485914 container start 2c9871f4e18b0c00bec76d9289fe3d8db5a4bd5f9ac5c948331261176f6922f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bartik, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:02:23 compute-0 podman[268637]: 2025-10-01 17:02:23.762475927 +0000 UTC m=+0.166746192 container attach 2c9871f4e18b0c00bec76d9289fe3d8db5a4bd5f9ac5c948331261176f6922f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bartik, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 17:02:23 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 53 KiB/s wr, 5 op/s
Oct 01 17:02:24 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "46dae3a3-bd69-43bd-9813-cd1117c66c7c", "format": "json"}]: dispatch
Oct 01 17:02:24 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "46dae3a3-bd69-43bd-9813-cd1117c66c7c", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:24 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "aef1d813-6fa0-431e-85ff-6739a4057903", "format": "json"}]: dispatch
Oct 01 17:02:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:aef1d813-6fa0-431e-85ff-6739a4057903, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:aef1d813-6fa0-431e-85ff-6739a4057903, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:24 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aef1d813-6fa0-431e-85ff-6739a4057903' of type subvolume
Oct 01 17:02:24 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:02:24.548+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aef1d813-6fa0-431e-85ff-6739a4057903' of type subvolume
Oct 01 17:02:24 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "aef1d813-6fa0-431e-85ff-6739a4057903", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aef1d813-6fa0-431e-85ff-6739a4057903, vol_name:cephfs) < ""
Oct 01 17:02:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/aef1d813-6fa0-431e-85ff-6739a4057903'' moved to trashcan
Oct 01 17:02:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:02:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aef1d813-6fa0-431e-85ff-6739a4057903, vol_name:cephfs) < ""
Oct 01 17:02:24 compute-0 magical_bartik[268653]: --> passed data devices: 0 physical, 3 LVM
Oct 01 17:02:24 compute-0 magical_bartik[268653]: --> relative data size: 1.0
Oct 01 17:02:24 compute-0 magical_bartik[268653]: --> All data devices are unavailable
Oct 01 17:02:24 compute-0 systemd[1]: libpod-2c9871f4e18b0c00bec76d9289fe3d8db5a4bd5f9ac5c948331261176f6922f7.scope: Deactivated successfully.
Oct 01 17:02:24 compute-0 systemd[1]: libpod-2c9871f4e18b0c00bec76d9289fe3d8db5a4bd5f9ac5c948331261176f6922f7.scope: Consumed 1.111s CPU time.
Oct 01 17:02:24 compute-0 podman[268683]: 2025-10-01 17:02:24.96614137 +0000 UTC m=+0.032757782 container died 2c9871f4e18b0c00bec76d9289fe3d8db5a4bd5f9ac5c948331261176f6922f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bartik, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 17:02:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f73fce6723cff69940a44cd9ea671090424ff8ac15a74513d880dbd348213fd2-merged.mount: Deactivated successfully.
Oct 01 17:02:25 compute-0 podman[268683]: 2025-10-01 17:02:25.028929444 +0000 UTC m=+0.095545836 container remove 2c9871f4e18b0c00bec76d9289fe3d8db5a4bd5f9ac5c948331261176f6922f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:02:25 compute-0 systemd[1]: libpod-conmon-2c9871f4e18b0c00bec76d9289fe3d8db5a4bd5f9ac5c948331261176f6922f7.scope: Deactivated successfully.
Oct 01 17:02:25 compute-0 podman[268682]: 2025-10-01 17:02:25.040040835 +0000 UTC m=+0.084648604 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 01 17:02:25 compute-0 sudo[268530]: pam_unix(sudo:session): session closed for user root
Oct 01 17:02:25 compute-0 ceph-mon[74273]: pgmap v977: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 53 KiB/s wr, 5 op/s
Oct 01 17:02:25 compute-0 sudo[268716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:02:25 compute-0 sudo[268716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:02:25 compute-0 sudo[268716]: pam_unix(sudo:session): session closed for user root
Oct 01 17:02:25 compute-0 sudo[268741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:02:25 compute-0 sudo[268741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:02:25 compute-0 sudo[268741]: pam_unix(sudo:session): session closed for user root
Oct 01 17:02:25 compute-0 sudo[268766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:02:25 compute-0 sudo[268766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:02:25 compute-0 sudo[268766]: pam_unix(sudo:session): session closed for user root
Oct 01 17:02:25 compute-0 sudo[268791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 17:02:25 compute-0 sudo[268791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:02:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:02:25 compute-0 podman[268856]: 2025-10-01 17:02:25.745400609 +0000 UTC m=+0.054000889 container create 543001bd604e7b41e6befd95386f66a3495008d0416fea4ec444cb14f7f75607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 17:02:25 compute-0 systemd[1]: Started libpod-conmon-543001bd604e7b41e6befd95386f66a3495008d0416fea4ec444cb14f7f75607.scope.
Oct 01 17:02:25 compute-0 podman[268856]: 2025-10-01 17:02:25.718031915 +0000 UTC m=+0.026632275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:02:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:02:25 compute-0 podman[268856]: 2025-10-01 17:02:25.840449288 +0000 UTC m=+0.149049558 container init 543001bd604e7b41e6befd95386f66a3495008d0416fea4ec444cb14f7f75607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:02:25 compute-0 podman[268856]: 2025-10-01 17:02:25.852023116 +0000 UTC m=+0.160623386 container start 543001bd604e7b41e6befd95386f66a3495008d0416fea4ec444cb14f7f75607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mendeleev, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:02:25 compute-0 podman[268856]: 2025-10-01 17:02:25.855431648 +0000 UTC m=+0.164031948 container attach 543001bd604e7b41e6befd95386f66a3495008d0416fea4ec444cb14f7f75607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mendeleev, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:02:25 compute-0 jolly_mendeleev[268873]: 167 167
Oct 01 17:02:25 compute-0 systemd[1]: libpod-543001bd604e7b41e6befd95386f66a3495008d0416fea4ec444cb14f7f75607.scope: Deactivated successfully.
Oct 01 17:02:25 compute-0 podman[268856]: 2025-10-01 17:02:25.859616629 +0000 UTC m=+0.168216929 container died 543001bd604e7b41e6befd95386f66a3495008d0416fea4ec444cb14f7f75607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mendeleev, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 17:02:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c823918d2989b33e229ba43b7cf9c5fb2cac3ff4a2ba447fa8dbca593d07f65f-merged.mount: Deactivated successfully.
Oct 01 17:02:25 compute-0 podman[268856]: 2025-10-01 17:02:25.911291452 +0000 UTC m=+0.219891722 container remove 543001bd604e7b41e6befd95386f66a3495008d0416fea4ec444cb14f7f75607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mendeleev, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:02:25 compute-0 systemd[1]: libpod-conmon-543001bd604e7b41e6befd95386f66a3495008d0416fea4ec444cb14f7f75607.scope: Deactivated successfully.
Oct 01 17:02:25 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 587 B/s rd, 50 KiB/s wr, 5 op/s
Oct 01 17:02:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct 01 17:02:26 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "aef1d813-6fa0-431e-85ff-6739a4057903", "format": "json"}]: dispatch
Oct 01 17:02:26 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "aef1d813-6fa0-431e-85ff-6739a4057903", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct 01 17:02:26 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct 01 17:02:26 compute-0 podman[268896]: 2025-10-01 17:02:26.123844099 +0000 UTC m=+0.051421536 container create ccebcafc30242554f8bee0b959c5a87ff3ea69bd26132e111262702b0801052f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_brattain, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:02:26 compute-0 systemd[1]: Started libpod-conmon-ccebcafc30242554f8bee0b959c5a87ff3ea69bd26132e111262702b0801052f.scope.
Oct 01 17:02:26 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:02:26 compute-0 podman[268896]: 2025-10-01 17:02:26.100362108 +0000 UTC m=+0.027939635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bc228d7130ce3d72b24d342164efa988ed41a25db30dc4bf3fbf566793684b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bc228d7130ce3d72b24d342164efa988ed41a25db30dc4bf3fbf566793684b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bc228d7130ce3d72b24d342164efa988ed41a25db30dc4bf3fbf566793684b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bc228d7130ce3d72b24d342164efa988ed41a25db30dc4bf3fbf566793684b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:02:26 compute-0 podman[268896]: 2025-10-01 17:02:26.213265028 +0000 UTC m=+0.140842515 container init ccebcafc30242554f8bee0b959c5a87ff3ea69bd26132e111262702b0801052f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_brattain, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 01 17:02:26 compute-0 podman[268896]: 2025-10-01 17:02:26.220498478 +0000 UTC m=+0.148075915 container start ccebcafc30242554f8bee0b959c5a87ff3ea69bd26132e111262702b0801052f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_brattain, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:02:26 compute-0 podman[268896]: 2025-10-01 17:02:26.223626937 +0000 UTC m=+0.151204474 container attach ccebcafc30242554f8bee0b959c5a87ff3ea69bd26132e111262702b0801052f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_brattain, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:02:26 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ecd5683e-7c6f-403f-a027-c9731cd1c5fb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ecd5683e-7c6f-403f-a027-c9731cd1c5fb, vol_name:cephfs) < ""
Oct 01 17:02:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ecd5683e-7c6f-403f-a027-c9731cd1c5fb/.meta.tmp'
Oct 01 17:02:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ecd5683e-7c6f-403f-a027-c9731cd1c5fb/.meta.tmp' to config b'/volumes/_nogroup/ecd5683e-7c6f-403f-a027-c9731cd1c5fb/.meta'
Oct 01 17:02:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ecd5683e-7c6f-403f-a027-c9731cd1c5fb, vol_name:cephfs) < ""
Oct 01 17:02:26 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ecd5683e-7c6f-403f-a027-c9731cd1c5fb", "format": "json"}]: dispatch
Oct 01 17:02:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ecd5683e-7c6f-403f-a027-c9731cd1c5fb, vol_name:cephfs) < ""
Oct 01 17:02:26 compute-0 youthful_brattain[268913]: {
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:     "0": [
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:         {
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "devices": [
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "/dev/loop3"
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             ],
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "lv_name": "ceph_lv0",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "lv_size": "21470642176",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "name": "ceph_lv0",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "tags": {
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.cluster_name": "ceph",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.crush_device_class": "",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.encrypted": "0",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.osd_id": "0",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.type": "block",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.vdo": "0"
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             },
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "type": "block",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "vg_name": "ceph_vg0"
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:         }
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:     ],
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:     "1": [
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:         {
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "devices": [
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "/dev/loop4"
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             ],
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "lv_name": "ceph_lv1",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "lv_size": "21470642176",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "name": "ceph_lv1",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "tags": {
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.cluster_name": "ceph",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.crush_device_class": "",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.encrypted": "0",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.osd_id": "1",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.type": "block",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.vdo": "0"
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             },
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "type": "block",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "vg_name": "ceph_vg1"
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:         }
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:     ],
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:     "2": [
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:         {
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "devices": [
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "/dev/loop5"
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             ],
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "lv_name": "ceph_lv2",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "lv_size": "21470642176",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "name": "ceph_lv2",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "tags": {
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.cluster_name": "ceph",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.crush_device_class": "",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.encrypted": "0",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.osd_id": "2",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.type": "block",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:                 "ceph.vdo": "0"
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             },
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "type": "block",
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:             "vg_name": "ceph_vg2"
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:         }
Oct 01 17:02:26 compute-0 youthful_brattain[268913]:     ]
Oct 01 17:02:26 compute-0 youthful_brattain[268913]: }
Oct 01 17:02:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ecd5683e-7c6f-403f-a027-c9731cd1c5fb, vol_name:cephfs) < ""
Oct 01 17:02:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:02:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:26 compute-0 systemd[1]: libpod-ccebcafc30242554f8bee0b959c5a87ff3ea69bd26132e111262702b0801052f.scope: Deactivated successfully.
Oct 01 17:02:26 compute-0 podman[268896]: 2025-10-01 17:02:26.960663282 +0000 UTC m=+0.888240759 container died ccebcafc30242554f8bee0b959c5a87ff3ea69bd26132e111262702b0801052f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_brattain, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 01 17:02:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-64bc228d7130ce3d72b24d342164efa988ed41a25db30dc4bf3fbf566793684b-merged.mount: Deactivated successfully.
Oct 01 17:02:27 compute-0 podman[268896]: 2025-10-01 17:02:27.019034413 +0000 UTC m=+0.946611850 container remove ccebcafc30242554f8bee0b959c5a87ff3ea69bd26132e111262702b0801052f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_brattain, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:02:27 compute-0 systemd[1]: libpod-conmon-ccebcafc30242554f8bee0b959c5a87ff3ea69bd26132e111262702b0801052f.scope: Deactivated successfully.
Oct 01 17:02:27 compute-0 sudo[268791]: pam_unix(sudo:session): session closed for user root
Oct 01 17:02:27 compute-0 ceph-mon[74273]: pgmap v978: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 587 B/s rd, 50 KiB/s wr, 5 op/s
Oct 01 17:02:27 compute-0 ceph-mon[74273]: osdmap e141: 3 total, 3 up, 3 in
Oct 01 17:02:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:27 compute-0 sudo[268936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:02:27 compute-0 sudo[268936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:02:27 compute-0 sudo[268936]: pam_unix(sudo:session): session closed for user root
Oct 01 17:02:27 compute-0 sudo[268961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:02:27 compute-0 sudo[268961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:02:27 compute-0 sudo[268961]: pam_unix(sudo:session): session closed for user root
Oct 01 17:02:27 compute-0 sudo[268986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:02:27 compute-0 sudo[268986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:02:27 compute-0 sudo[268986]: pam_unix(sudo:session): session closed for user root
Oct 01 17:02:27 compute-0 sudo[269011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 17:02:27 compute-0 sudo[269011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:02:27 compute-0 podman[269076]: 2025-10-01 17:02:27.665612472 +0000 UTC m=+0.036742117 container create 91dcd5b9fb2402d062ef1650605a448a5171d7a5537e7a21c3d165d668719f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_haslett, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:02:27 compute-0 systemd[1]: Started libpod-conmon-91dcd5b9fb2402d062ef1650605a448a5171d7a5537e7a21c3d165d668719f3f.scope.
Oct 01 17:02:27 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:02:27 compute-0 podman[269076]: 2025-10-01 17:02:27.735811177 +0000 UTC m=+0.106940872 container init 91dcd5b9fb2402d062ef1650605a448a5171d7a5537e7a21c3d165d668719f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 01 17:02:27 compute-0 podman[269076]: 2025-10-01 17:02:27.741954098 +0000 UTC m=+0.113083773 container start 91dcd5b9fb2402d062ef1650605a448a5171d7a5537e7a21c3d165d668719f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_haslett, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 17:02:27 compute-0 podman[269076]: 2025-10-01 17:02:27.745325212 +0000 UTC m=+0.116454917 container attach 91dcd5b9fb2402d062ef1650605a448a5171d7a5537e7a21c3d165d668719f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:02:27 compute-0 brave_haslett[269093]: 167 167
Oct 01 17:02:27 compute-0 systemd[1]: libpod-91dcd5b9fb2402d062ef1650605a448a5171d7a5537e7a21c3d165d668719f3f.scope: Deactivated successfully.
Oct 01 17:02:27 compute-0 podman[269076]: 2025-10-01 17:02:27.746458692 +0000 UTC m=+0.117588347 container died 91dcd5b9fb2402d062ef1650605a448a5171d7a5537e7a21c3d165d668719f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_haslett, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:02:27 compute-0 podman[269076]: 2025-10-01 17:02:27.65175345 +0000 UTC m=+0.022883125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:02:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1630bf621a9e128581b243952b436a6f8154e7164e7806954c76b56271fd25a-merged.mount: Deactivated successfully.
Oct 01 17:02:27 compute-0 podman[269076]: 2025-10-01 17:02:27.781961332 +0000 UTC m=+0.153091007 container remove 91dcd5b9fb2402d062ef1650605a448a5171d7a5537e7a21c3d165d668719f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 17:02:27 compute-0 systemd[1]: libpod-conmon-91dcd5b9fb2402d062ef1650605a448a5171d7a5537e7a21c3d165d668719f3f.scope: Deactivated successfully.
Oct 01 17:02:27 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 53 KiB/s wr, 5 op/s
Oct 01 17:02:28 compute-0 podman[269116]: 2025-10-01 17:02:28.030474055 +0000 UTC m=+0.067055895 container create 1fe60078c4788005cd6cb04583144ca36d09dc189ebf7c9325fd27d2afc4d1ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:02:28 compute-0 systemd[1]: Started libpod-conmon-1fe60078c4788005cd6cb04583144ca36d09dc189ebf7c9325fd27d2afc4d1ed.scope.
Oct 01 17:02:28 compute-0 podman[269116]: 2025-10-01 17:02:28.00287995 +0000 UTC m=+0.039461820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:02:28 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/921c52def8003090cf7eb7c4aa6ffa19cf8309e620ab027e753a3188bb83a02d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/921c52def8003090cf7eb7c4aa6ffa19cf8309e620ab027e753a3188bb83a02d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/921c52def8003090cf7eb7c4aa6ffa19cf8309e620ab027e753a3188bb83a02d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/921c52def8003090cf7eb7c4aa6ffa19cf8309e620ab027e753a3188bb83a02d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:02:28 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ecd5683e-7c6f-403f-a027-c9731cd1c5fb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:28 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ecd5683e-7c6f-403f-a027-c9731cd1c5fb", "format": "json"}]: dispatch
Oct 01 17:02:28 compute-0 podman[269116]: 2025-10-01 17:02:28.121387086 +0000 UTC m=+0.157968896 container init 1fe60078c4788005cd6cb04583144ca36d09dc189ebf7c9325fd27d2afc4d1ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 17:02:28 compute-0 podman[269116]: 2025-10-01 17:02:28.129057417 +0000 UTC m=+0.165639207 container start 1fe60078c4788005cd6cb04583144ca36d09dc189ebf7c9325fd27d2afc4d1ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chandrasekhar, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 17:02:28 compute-0 podman[269116]: 2025-10-01 17:02:28.133020427 +0000 UTC m=+0.169602217 container attach 1fe60078c4788005cd6cb04583144ca36d09dc189ebf7c9325fd27d2afc4d1ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 17:02:28 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:02:28.507 162304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2971fc2-5b75-459a-98a0-6e626d0d4d99, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 17:02:29 compute-0 ceph-mon[74273]: pgmap v980: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 53 KiB/s wr, 5 op/s
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]: {
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:         "osd_id": 2,
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:         "type": "bluestore"
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:     },
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:         "osd_id": 0,
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:         "type": "bluestore"
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:     },
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:         "osd_id": 1,
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:         "type": "bluestore"
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]:     }
Oct 01 17:02:29 compute-0 competent_chandrasekhar[269132]: }
Oct 01 17:02:29 compute-0 systemd[1]: libpod-1fe60078c4788005cd6cb04583144ca36d09dc189ebf7c9325fd27d2afc4d1ed.scope: Deactivated successfully.
Oct 01 17:02:29 compute-0 podman[269116]: 2025-10-01 17:02:29.16894287 +0000 UTC m=+1.205524700 container died 1fe60078c4788005cd6cb04583144ca36d09dc189ebf7c9325fd27d2afc4d1ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chandrasekhar, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:02:29 compute-0 systemd[1]: libpod-1fe60078c4788005cd6cb04583144ca36d09dc189ebf7c9325fd27d2afc4d1ed.scope: Consumed 1.042s CPU time.
Oct 01 17:02:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-921c52def8003090cf7eb7c4aa6ffa19cf8309e620ab027e753a3188bb83a02d-merged.mount: Deactivated successfully.
Oct 01 17:02:29 compute-0 podman[269116]: 2025-10-01 17:02:29.245480671 +0000 UTC m=+1.282062501 container remove 1fe60078c4788005cd6cb04583144ca36d09dc189ebf7c9325fd27d2afc4d1ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chandrasekhar, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 17:02:29 compute-0 systemd[1]: libpod-conmon-1fe60078c4788005cd6cb04583144ca36d09dc189ebf7c9325fd27d2afc4d1ed.scope: Deactivated successfully.
Oct 01 17:02:29 compute-0 sudo[269011]: pam_unix(sudo:session): session closed for user root
Oct 01 17:02:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:02:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:02:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:02:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:02:29 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 606c4faa-bd83-4cfa-a84a-354a152c3b5e does not exist
Oct 01 17:02:29 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 0035b7d9-c8c1-47ab-bfcf-de5fa9347243 does not exist
Oct 01 17:02:29 compute-0 sudo[269176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:02:29 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b8ad6a25-2c76-4709-bb24-c02c6f27169d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b8ad6a25-2c76-4709-bb24-c02c6f27169d, vol_name:cephfs) < ""
Oct 01 17:02:29 compute-0 sudo[269176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:02:29 compute-0 sudo[269176]: pam_unix(sudo:session): session closed for user root
Oct 01 17:02:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b8ad6a25-2c76-4709-bb24-c02c6f27169d/.meta.tmp'
Oct 01 17:02:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b8ad6a25-2c76-4709-bb24-c02c6f27169d/.meta.tmp' to config b'/volumes/_nogroup/b8ad6a25-2c76-4709-bb24-c02c6f27169d/.meta'
Oct 01 17:02:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b8ad6a25-2c76-4709-bb24-c02c6f27169d, vol_name:cephfs) < ""
Oct 01 17:02:29 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b8ad6a25-2c76-4709-bb24-c02c6f27169d", "format": "json"}]: dispatch
Oct 01 17:02:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b8ad6a25-2c76-4709-bb24-c02c6f27169d, vol_name:cephfs) < ""
Oct 01 17:02:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b8ad6a25-2c76-4709-bb24-c02c6f27169d, vol_name:cephfs) < ""
Oct 01 17:02:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:02:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:29 compute-0 sudo[269201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 17:02:29 compute-0 sudo[269201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:02:29 compute-0 sudo[269201]: pam_unix(sudo:session): session closed for user root
Oct 01 17:02:29 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 51 KiB/s wr, 5 op/s
Oct 01 17:02:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:02:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:02:30 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b8ad6a25-2c76-4709-bb24-c02c6f27169d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:30 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b8ad6a25-2c76-4709-bb24-c02c6f27169d", "format": "json"}]: dispatch
Oct 01 17:02:30 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:30 compute-0 ceph-mon[74273]: pgmap v981: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 51 KiB/s wr, 5 op/s
Oct 01 17:02:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:02:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct 01 17:02:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct 01 17:02:30 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct 01 17:02:31 compute-0 ceph-mon[74273]: osdmap e142: 3 total, 3 up, 3 in
Oct 01 17:02:32 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 27 KiB/s wr, 3 op/s
Oct 01 17:02:32 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ecd5683e-7c6f-403f-a027-c9731cd1c5fb", "format": "json"}]: dispatch
Oct 01 17:02:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ecd5683e-7c6f-403f-a027-c9731cd1c5fb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ecd5683e-7c6f-403f-a027-c9731cd1c5fb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:32 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:02:32.207+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ecd5683e-7c6f-403f-a027-c9731cd1c5fb' of type subvolume
Oct 01 17:02:32 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ecd5683e-7c6f-403f-a027-c9731cd1c5fb' of type subvolume
Oct 01 17:02:32 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ecd5683e-7c6f-403f-a027-c9731cd1c5fb", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ecd5683e-7c6f-403f-a027-c9731cd1c5fb, vol_name:cephfs) < ""
Oct 01 17:02:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ecd5683e-7c6f-403f-a027-c9731cd1c5fb'' moved to trashcan
Oct 01 17:02:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:02:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ecd5683e-7c6f-403f-a027-c9731cd1c5fb, vol_name:cephfs) < ""
Oct 01 17:02:32 compute-0 ceph-mon[74273]: pgmap v983: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 27 KiB/s wr, 3 op/s
Oct 01 17:02:32 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ecd5683e-7c6f-403f-a027-c9731cd1c5fb", "format": "json"}]: dispatch
Oct 01 17:02:32 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ecd5683e-7c6f-403f-a027-c9731cd1c5fb", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:34 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 56 KiB/s wr, 6 op/s
Oct 01 17:02:34 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b8ad6a25-2c76-4709-bb24-c02c6f27169d", "format": "json"}]: dispatch
Oct 01 17:02:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b8ad6a25-2c76-4709-bb24-c02c6f27169d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b8ad6a25-2c76-4709-bb24-c02c6f27169d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:34 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:02:34.354+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b8ad6a25-2c76-4709-bb24-c02c6f27169d' of type subvolume
Oct 01 17:02:34 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b8ad6a25-2c76-4709-bb24-c02c6f27169d' of type subvolume
Oct 01 17:02:34 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b8ad6a25-2c76-4709-bb24-c02c6f27169d", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b8ad6a25-2c76-4709-bb24-c02c6f27169d, vol_name:cephfs) < ""
Oct 01 17:02:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b8ad6a25-2c76-4709-bb24-c02c6f27169d'' moved to trashcan
Oct 01 17:02:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:02:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b8ad6a25-2c76-4709-bb24-c02c6f27169d, vol_name:cephfs) < ""
Oct 01 17:02:35 compute-0 ceph-mon[74273]: pgmap v984: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 56 KiB/s wr, 6 op/s
Oct 01 17:02:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:02:36 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 620 B/s rd, 46 KiB/s wr, 4 op/s
Oct 01 17:02:36 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b8ad6a25-2c76-4709-bb24-c02c6f27169d", "format": "json"}]: dispatch
Oct 01 17:02:36 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b8ad6a25-2c76-4709-bb24-c02c6f27169d", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:36 compute-0 podman[269226]: 2025-10-01 17:02:36.816273436 +0000 UTC m=+0.120704511 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 01 17:02:37 compute-0 ceph-mon[74273]: pgmap v985: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 620 B/s rd, 46 KiB/s wr, 4 op/s
Oct 01 17:02:38 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 45 KiB/s wr, 4 op/s
Oct 01 17:02:38 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dbc2114d-a89a-4156-8432-271ef7164acc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dbc2114d-a89a-4156-8432-271ef7164acc, vol_name:cephfs) < ""
Oct 01 17:02:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dbc2114d-a89a-4156-8432-271ef7164acc/.meta.tmp'
Oct 01 17:02:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dbc2114d-a89a-4156-8432-271ef7164acc/.meta.tmp' to config b'/volumes/_nogroup/dbc2114d-a89a-4156-8432-271ef7164acc/.meta'
Oct 01 17:02:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dbc2114d-a89a-4156-8432-271ef7164acc, vol_name:cephfs) < ""
Oct 01 17:02:38 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dbc2114d-a89a-4156-8432-271ef7164acc", "format": "json"}]: dispatch
Oct 01 17:02:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dbc2114d-a89a-4156-8432-271ef7164acc, vol_name:cephfs) < ""
Oct 01 17:02:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dbc2114d-a89a-4156-8432-271ef7164acc, vol_name:cephfs) < ""
Oct 01 17:02:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:02:38 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:38 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:39 compute-0 ceph-mon[74273]: pgmap v986: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 45 KiB/s wr, 4 op/s
Oct 01 17:02:39 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dbc2114d-a89a-4156-8432-271ef7164acc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:39 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dbc2114d-a89a-4156-8432-271ef7164acc", "format": "json"}]: dispatch
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:02:39.287720) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338159287745, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1739, "num_deletes": 258, "total_data_size": 2475460, "memory_usage": 2511728, "flush_reason": "Manual Compaction"}
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338159306848, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2437201, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19468, "largest_seqno": 21206, "table_properties": {"data_size": 2428917, "index_size": 4906, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19287, "raw_average_key_size": 21, "raw_value_size": 2411571, "raw_average_value_size": 2661, "num_data_blocks": 218, "num_entries": 906, "num_filter_entries": 906, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759338030, "oldest_key_time": 1759338030, "file_creation_time": 1759338159, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 19237 microseconds, and 7349 cpu microseconds.
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:02:39.306949) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2437201 bytes OK
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:02:39.306973) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:02:39.308551) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:02:39.308572) EVENT_LOG_v1 {"time_micros": 1759338159308564, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:02:39.308594) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2467505, prev total WAL file size 2467505, number of live WAL files 2.
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:02:39.309850) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2380KB)], [47(6857KB)]
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338159309885, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9459520, "oldest_snapshot_seqno": -1}
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4516 keys, 7721849 bytes, temperature: kUnknown
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338159366698, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7721849, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7690612, "index_size": 18839, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 111991, "raw_average_key_size": 24, "raw_value_size": 7607992, "raw_average_value_size": 1684, "num_data_blocks": 785, "num_entries": 4516, "num_filter_entries": 4516, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336399, "oldest_key_time": 0, "file_creation_time": 1759338159, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:02:39.366951) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7721849 bytes
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:02:39.367995) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.3 rd, 135.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 6.7 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 5044, records dropped: 528 output_compression: NoCompression
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:02:39.368012) EVENT_LOG_v1 {"time_micros": 1759338159368003, "job": 24, "event": "compaction_finished", "compaction_time_micros": 56884, "compaction_time_cpu_micros": 27047, "output_level": 6, "num_output_files": 1, "total_output_size": 7721849, "num_input_records": 5044, "num_output_records": 4516, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338159368559, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338159369985, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:02:39.309760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:02:39.370069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:02:39.370075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:02:39.370078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:02:39.370081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:02:39 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:02:39.370084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:02:39 compute-0 podman[269252]: 2025-10-01 17:02:39.779321741 +0000 UTC m=+0.084723481 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 01 17:02:40 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 46 KiB/s wr, 4 op/s
Oct 01 17:02:40 compute-0 ceph-mon[74273]: pgmap v987: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 46 KiB/s wr, 4 op/s
Oct 01 17:02:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:02:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:02:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:02:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:02:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f816c5e9b50>)]
Oct 01 17:02:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Oct 01 17:02:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:02:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:02:42 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 356 B/s rd, 40 KiB/s wr, 4 op/s
Oct 01 17:02:43 compute-0 ceph-mon[74273]: pgmap v988: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 356 B/s rd, 40 KiB/s wr, 4 op/s
Oct 01 17:02:43 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.pmbdpj(active, since 28m)
Oct 01 17:02:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 17:02:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/251716039' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:02:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 17:02:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/251716039' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:02:44 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 45 KiB/s wr, 4 op/s
Oct 01 17:02:44 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dbc2114d-a89a-4156-8432-271ef7164acc", "format": "json"}]: dispatch
Oct 01 17:02:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dbc2114d-a89a-4156-8432-271ef7164acc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dbc2114d-a89a-4156-8432-271ef7164acc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:44 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dbc2114d-a89a-4156-8432-271ef7164acc' of type subvolume
Oct 01 17:02:44 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:02:44.323+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dbc2114d-a89a-4156-8432-271ef7164acc' of type subvolume
Oct 01 17:02:44 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dbc2114d-a89a-4156-8432-271ef7164acc", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dbc2114d-a89a-4156-8432-271ef7164acc, vol_name:cephfs) < ""
Oct 01 17:02:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dbc2114d-a89a-4156-8432-271ef7164acc'' moved to trashcan
Oct 01 17:02:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:02:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dbc2114d-a89a-4156-8432-271ef7164acc, vol_name:cephfs) < ""
Oct 01 17:02:44 compute-0 ceph-mon[74273]: mgrmap e13: compute-0.pmbdpj(active, since 28m)
Oct 01 17:02:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/251716039' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:02:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/251716039' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:02:44 compute-0 ceph-mon[74273]: pgmap v989: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 45 KiB/s wr, 4 op/s
Oct 01 17:02:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:02:45 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dbc2114d-a89a-4156-8432-271ef7164acc", "format": "json"}]: dispatch
Oct 01 17:02:45 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dbc2114d-a89a-4156-8432-271ef7164acc", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:46 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 25 KiB/s wr, 3 op/s
Oct 01 17:02:46 compute-0 ceph-mon[74273]: pgmap v990: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 25 KiB/s wr, 3 op/s
Oct 01 17:02:48 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 25 KiB/s wr, 3 op/s
Oct 01 17:02:49 compute-0 ceph-mon[74273]: pgmap v991: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 25 KiB/s wr, 3 op/s
Oct 01 17:02:49 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:02:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/.meta.tmp'
Oct 01 17:02:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/.meta.tmp' to config b'/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/.meta'
Oct 01 17:02:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:02:49 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "format": "json"}]: dispatch
Oct 01 17:02:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:02:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:02:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:02:49 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:50 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 37 KiB/s wr, 4 op/s
Oct 01 17:02:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:02:50 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "29bcbd73-54c4-429e-b29a-68e1af0b3512", "format": "json"}]: dispatch
Oct 01 17:02:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:29bcbd73-54c4-429e-b29a-68e1af0b3512, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:29bcbd73-54c4-429e-b29a-68e1af0b3512, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:02:50 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:02:50.802+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '29bcbd73-54c4-429e-b29a-68e1af0b3512' of type subvolume
Oct 01 17:02:50 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '29bcbd73-54c4-429e-b29a-68e1af0b3512' of type subvolume
Oct 01 17:02:50 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "29bcbd73-54c4-429e-b29a-68e1af0b3512", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:29bcbd73-54c4-429e-b29a-68e1af0b3512, vol_name:cephfs) < ""
Oct 01 17:02:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/29bcbd73-54c4-429e-b29a-68e1af0b3512'' moved to trashcan
Oct 01 17:02:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:02:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:29bcbd73-54c4-429e-b29a-68e1af0b3512, vol_name:cephfs) < ""
Oct 01 17:02:51 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:51 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "format": "json"}]: dispatch
Oct 01 17:02:51 compute-0 ceph-mon[74273]: pgmap v992: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 37 KiB/s wr, 4 op/s
Oct 01 17:02:51 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d82a03d3-ac0e-4c77-9123-db50d46e8e91", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:51 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:d82a03d3-ac0e-4c77-9123-db50d46e8e91, vol_name:cephfs) < ""
Oct 01 17:02:51 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d82a03d3-ac0e-4c77-9123-db50d46e8e91/.meta.tmp'
Oct 01 17:02:51 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d82a03d3-ac0e-4c77-9123-db50d46e8e91/.meta.tmp' to config b'/volumes/_nogroup/d82a03d3-ac0e-4c77-9123-db50d46e8e91/.meta'
Oct 01 17:02:51 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:d82a03d3-ac0e-4c77-9123-db50d46e8e91, vol_name:cephfs) < ""
Oct 01 17:02:51 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d82a03d3-ac0e-4c77-9123-db50d46e8e91", "format": "json"}]: dispatch
Oct 01 17:02:51 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d82a03d3-ac0e-4c77-9123-db50d46e8e91, vol_name:cephfs) < ""
Oct 01 17:02:51 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d82a03d3-ac0e-4c77-9123-db50d46e8e91, vol_name:cephfs) < ""
Oct 01 17:02:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:02:51 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:52 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 1 op/s
Oct 01 17:02:52 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "29bcbd73-54c4-429e-b29a-68e1af0b3512", "format": "json"}]: dispatch
Oct 01 17:02:52 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "29bcbd73-54c4-429e-b29a-68e1af0b3512", "force": true, "format": "json"}]: dispatch
Oct 01 17:02:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:52 compute-0 podman[269275]: 2025-10-01 17:02:52.775486758 +0000 UTC m=+0.086118595 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 01 17:02:53 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d82a03d3-ac0e-4c77-9123-db50d46e8e91", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:53 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d82a03d3-ac0e-4c77-9123-db50d46e8e91", "format": "json"}]: dispatch
Oct 01 17:02:53 compute-0 ceph-mon[74273]: pgmap v993: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 1 op/s
Oct 01 17:02:53 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:02:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:02:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Oct 01 17:02:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:02:53 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice with tenant 1841221f332340a299707d253063659f
Oct 01 17:02:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:02:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:02:53 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:02:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:02:54 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 38 KiB/s wr, 4 op/s
Oct 01 17:02:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:02:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:02:54 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:02:55 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:02:55 compute-0 ceph-mon[74273]: pgmap v994: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 38 KiB/s wr, 4 op/s
Oct 01 17:02:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:02:55 compute-0 podman[269297]: 2025-10-01 17:02:55.77724001 +0000 UTC m=+0.085237188 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 17:02:56 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 31 KiB/s wr, 3 op/s
Oct 01 17:02:57 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cfcadb50-45a4-4b1d-8651-7ec7bda466b1", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:cfcadb50-45a4-4b1d-8651-7ec7bda466b1, vol_name:cephfs) < ""
Oct 01 17:02:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cfcadb50-45a4-4b1d-8651-7ec7bda466b1/.meta.tmp'
Oct 01 17:02:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cfcadb50-45a4-4b1d-8651-7ec7bda466b1/.meta.tmp' to config b'/volumes/_nogroup/cfcadb50-45a4-4b1d-8651-7ec7bda466b1/.meta'
Oct 01 17:02:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:cfcadb50-45a4-4b1d-8651-7ec7bda466b1, vol_name:cephfs) < ""
Oct 01 17:02:57 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cfcadb50-45a4-4b1d-8651-7ec7bda466b1", "format": "json"}]: dispatch
Oct 01 17:02:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cfcadb50-45a4-4b1d-8651-7ec7bda466b1, vol_name:cephfs) < ""
Oct 01 17:02:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cfcadb50-45a4-4b1d-8651-7ec7bda466b1, vol_name:cephfs) < ""
Oct 01 17:02:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:02:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:57 compute-0 ceph-mon[74273]: pgmap v995: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 31 KiB/s wr, 3 op/s
Oct 01 17:02:57 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:02:57 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:02:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:02:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Oct 01 17:02:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:02:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Oct 01 17:02:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Oct 01 17:02:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 01 17:02:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:02:57 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:02:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:02:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:02:57 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:02:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:02:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:02:58 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 31 KiB/s wr, 3 op/s
Oct 01 17:02:58 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cfcadb50-45a4-4b1d-8651-7ec7bda466b1", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:02:58 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cfcadb50-45a4-4b1d-8651-7ec7bda466b1", "format": "json"}]: dispatch
Oct 01 17:02:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:02:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Oct 01 17:02:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 01 17:02:59 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:02:59 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:02:59 compute-0 ceph-mon[74273]: pgmap v996: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 31 KiB/s wr, 3 op/s
Oct 01 17:03:00 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 59 KiB/s wr, 6 op/s
Oct 01 17:03:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:03:00 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d82a03d3-ac0e-4c77-9123-db50d46e8e91", "format": "json"}]: dispatch
Oct 01 17:03:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d82a03d3-ac0e-4c77-9123-db50d46e8e91, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:03:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d82a03d3-ac0e-4c77-9123-db50d46e8e91, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:03:00 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:03:00.833+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd82a03d3-ac0e-4c77-9123-db50d46e8e91' of type subvolume
Oct 01 17:03:00 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd82a03d3-ac0e-4c77-9123-db50d46e8e91' of type subvolume
Oct 01 17:03:00 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d82a03d3-ac0e-4c77-9123-db50d46e8e91", "force": true, "format": "json"}]: dispatch
Oct 01 17:03:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d82a03d3-ac0e-4c77-9123-db50d46e8e91, vol_name:cephfs) < ""
Oct 01 17:03:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d82a03d3-ac0e-4c77-9123-db50d46e8e91'' moved to trashcan
Oct 01 17:03:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:03:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d82a03d3-ac0e-4c77-9123-db50d46e8e91, vol_name:cephfs) < ""
Oct 01 17:03:01 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:03:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:03:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Oct 01 17:03:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:03:01 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice with tenant 1841221f332340a299707d253063659f
Oct 01 17:03:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:03:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:03:01 compute-0 ceph-mon[74273]: pgmap v997: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 59 KiB/s wr, 6 op/s
Oct 01 17:03:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:03:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:01 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:02 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 47 KiB/s wr, 5 op/s
Oct 01 17:03:02 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d82a03d3-ac0e-4c77-9123-db50d46e8e91", "format": "json"}]: dispatch
Oct 01 17:03:02 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d82a03d3-ac0e-4c77-9123-db50d46e8e91", "force": true, "format": "json"}]: dispatch
Oct 01 17:03:02 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:03:03 compute-0 ceph-mon[74273]: pgmap v998: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 47 KiB/s wr, 5 op/s
Oct 01 17:03:04 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 46 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 70 KiB/s wr, 8 op/s
Oct 01 17:03:04 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cfcadb50-45a4-4b1d-8651-7ec7bda466b1", "format": "json"}]: dispatch
Oct 01 17:03:04 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cfcadb50-45a4-4b1d-8651-7ec7bda466b1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:03:04 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cfcadb50-45a4-4b1d-8651-7ec7bda466b1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:03:04 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cfcadb50-45a4-4b1d-8651-7ec7bda466b1' of type subvolume
Oct 01 17:03:04 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:03:04.338+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cfcadb50-45a4-4b1d-8651-7ec7bda466b1' of type subvolume
Oct 01 17:03:04 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cfcadb50-45a4-4b1d-8651-7ec7bda466b1", "force": true, "format": "json"}]: dispatch
Oct 01 17:03:04 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cfcadb50-45a4-4b1d-8651-7ec7bda466b1, vol_name:cephfs) < ""
Oct 01 17:03:04 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cfcadb50-45a4-4b1d-8651-7ec7bda466b1'' moved to trashcan
Oct 01 17:03:04 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:03:04 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cfcadb50-45a4-4b1d-8651-7ec7bda466b1, vol_name:cephfs) < ""
Oct 01 17:03:04 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:03:04 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Oct 01 17:03:04 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:03:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Oct 01 17:03:04 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Oct 01 17:03:04 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 01 17:03:04 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:04 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:03:04 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:04 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:03:04 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:03:04 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:03:04 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:05 compute-0 ceph-mon[74273]: pgmap v999: 305 pgs: 305 active+clean; 46 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 70 KiB/s wr, 8 op/s
Oct 01 17:03:05 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:03:05 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Oct 01 17:03:05 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 01 17:03:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:03:06 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 46 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 50 KiB/s wr, 6 op/s
Oct 01 17:03:06 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cfcadb50-45a4-4b1d-8651-7ec7bda466b1", "format": "json"}]: dispatch
Oct 01 17:03:06 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cfcadb50-45a4-4b1d-8651-7ec7bda466b1", "force": true, "format": "json"}]: dispatch
Oct 01 17:03:06 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:03:06 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:03:07 compute-0 ceph-mon[74273]: pgmap v1000: 305 pgs: 305 active+clean; 46 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 50 KiB/s wr, 6 op/s
Oct 01 17:03:07 compute-0 podman[269321]: 2025-10-01 17:03:07.825271779 +0000 UTC m=+0.133091097 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 01 17:03:08 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 46 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 50 KiB/s wr, 6 op/s
Oct 01 17:03:08 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:03:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:03:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Oct 01 17:03:08 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:08 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice_bob with tenant 1841221f332340a299707d253063659f
Oct 01 17:03:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:03:08 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:08 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:03:08 compute-0 ceph-mon[74273]: pgmap v1001: 305 pgs: 305 active+clean; 46 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 50 KiB/s wr, 6 op/s
Oct 01 17:03:08 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:03:08 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:08 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:08 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:08 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:03:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:03:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/.meta.tmp'
Oct 01 17:03:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/.meta.tmp' to config b'/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/.meta'
Oct 01 17:03:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:03:08 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "format": "json"}]: dispatch
Oct 01 17:03:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:03:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:03:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:03:08 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:03:09 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:03:09 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "format": "json"}]: dispatch
Oct 01 17:03:09 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:03:10 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 47 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 81 KiB/s wr, 10 op/s
Oct 01 17:03:10 compute-0 ceph-mon[74273]: pgmap v1002: 305 pgs: 305 active+clean; 47 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 81 KiB/s wr, 10 op/s
Oct 01 17:03:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:03:10 compute-0 podman[269348]: 2025-10-01 17:03:10.792738588 +0000 UTC m=+0.086067007 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_17:03:11
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'backups', 'default.rgw.control', '.mgr', 'vms', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root']
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f816c23c520>)]
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Oct 01 17:03:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Oct 01 17:03:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Oct 01 17:03:11 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:03:11 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:03:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Oct 01 17:03:11 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Oct 01 17:03:12 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 47 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 53 KiB/s wr, 7 op/s
Oct 01 17:03:12 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b596d505-7972-465b-8b8f-82b926477693", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:03:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b596d505-7972-465b-8b8f-82b926477693, vol_name:cephfs) < ""
Oct 01 17:03:12 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:12 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:12 compute-0 ceph-mon[74273]: pgmap v1003: 305 pgs: 305 active+clean; 47 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 53 KiB/s wr, 7 op/s
Oct 01 17:03:12 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.pmbdpj(active, since 29m)
Oct 01 17:03:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b596d505-7972-465b-8b8f-82b926477693/.meta.tmp'
Oct 01 17:03:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b596d505-7972-465b-8b8f-82b926477693/.meta.tmp' to config b'/volumes/_nogroup/b596d505-7972-465b-8b8f-82b926477693/.meta'
Oct 01 17:03:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b596d505-7972-465b-8b8f-82b926477693, vol_name:cephfs) < ""
Oct 01 17:03:12 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b596d505-7972-465b-8b8f-82b926477693", "format": "json"}]: dispatch
Oct 01 17:03:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b596d505-7972-465b-8b8f-82b926477693, vol_name:cephfs) < ""
Oct 01 17:03:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b596d505-7972-465b-8b8f-82b926477693, vol_name:cephfs) < ""
Oct 01 17:03:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:03:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:03:13 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b596d505-7972-465b-8b8f-82b926477693", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:03:13 compute-0 ceph-mon[74273]: mgrmap e14: compute-0.pmbdpj(active, since 29m)
Oct 01 17:03:13 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b596d505-7972-465b-8b8f-82b926477693", "format": "json"}]: dispatch
Oct 01 17:03:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:03:14 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 47 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 79 KiB/s wr, 10 op/s
Oct 01 17:03:14 compute-0 nova_compute[259504]: 2025-10-01 17:03:14.749 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:03:14 compute-0 ceph-mon[74273]: pgmap v1004: 305 pgs: 305 active+clean; 47 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 79 KiB/s wr, 10 op/s
Oct 01 17:03:15 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:03:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:03:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Oct 01 17:03:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:15 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice_bob with tenant 1841221f332340a299707d253063659f
Oct 01 17:03:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:03:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:03:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:03:15 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:03:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:16 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 47 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 57 KiB/s wr, 7 op/s
Oct 01 17:03:16 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b596d505-7972-465b-8b8f-82b926477693", "auth_id": "tempest-cephx-id-26224483", "tenant_id": "a0ac8ec815504b8dae62c40a55008f52", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:03:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume authorize, sub_name:b596d505-7972-465b-8b8f-82b926477693, tenant_id:a0ac8ec815504b8dae62c40a55008f52, vol_name:cephfs) < ""
Oct 01 17:03:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"} v 0) v1
Oct 01 17:03:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:16 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID tempest-cephx-id-26224483 with tenant a0ac8ec815504b8dae62c40a55008f52
Oct 01 17:03:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/b596d505-7972-465b-8b8f-82b926477693/dec8c348-a88e-4722-a1a2-25b1f0501b74", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_b596d505-7972-465b-8b8f-82b926477693", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:03:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/b596d505-7972-465b-8b8f-82b926477693/dec8c348-a88e-4722-a1a2-25b1f0501b74", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_b596d505-7972-465b-8b8f-82b926477693", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/b596d505-7972-465b-8b8f-82b926477693/dec8c348-a88e-4722-a1a2-25b1f0501b74", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_b596d505-7972-465b-8b8f-82b926477693", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume authorize, sub_name:b596d505-7972-465b-8b8f-82b926477693, tenant_id:a0ac8ec815504b8dae62c40a55008f52, vol_name:cephfs) < ""
Oct 01 17:03:16 compute-0 nova_compute[259504]: 2025-10-01 17:03:16.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:03:16 compute-0 nova_compute[259504]: 2025-10-01 17:03:16.775 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:03:16 compute-0 nova_compute[259504]: 2025-10-01 17:03:16.775 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:03:16 compute-0 nova_compute[259504]: 2025-10-01 17:03:16.775 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:03:16 compute-0 nova_compute[259504]: 2025-10-01 17:03:16.776 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 17:03:16 compute-0 nova_compute[259504]: 2025-10-01 17:03:16.776 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:03:16 compute-0 ceph-mon[74273]: pgmap v1005: 305 pgs: 305 active+clean; 47 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 57 KiB/s wr, 7 op/s
Oct 01 17:03:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/b596d505-7972-465b-8b8f-82b926477693/dec8c348-a88e-4722-a1a2-25b1f0501b74", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_b596d505-7972-465b-8b8f-82b926477693", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/b596d505-7972-465b-8b8f-82b926477693/dec8c348-a88e-4722-a1a2-25b1f0501b74", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_b596d505-7972-465b-8b8f-82b926477693", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:03:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/137699243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:03:17 compute-0 nova_compute[259504]: 2025-10-01 17:03:17.234 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:03:17 compute-0 nova_compute[259504]: 2025-10-01 17:03:17.423 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 17:03:17 compute-0 nova_compute[259504]: 2025-10-01 17:03:17.424 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5128MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 17:03:17 compute-0 nova_compute[259504]: 2025-10-01 17:03:17.424 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:03:17 compute-0 nova_compute[259504]: 2025-10-01 17:03:17.424 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:03:17 compute-0 nova_compute[259504]: 2025-10-01 17:03:17.536 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 17:03:17 compute-0 nova_compute[259504]: 2025-10-01 17:03:17.536 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 17:03:17 compute-0 nova_compute[259504]: 2025-10-01 17:03:17.558 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:03:17 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b596d505-7972-465b-8b8f-82b926477693", "auth_id": "tempest-cephx-id-26224483", "tenant_id": "a0ac8ec815504b8dae62c40a55008f52", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:03:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/137699243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:03:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:03:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4040502504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:03:17 compute-0 nova_compute[259504]: 2025-10-01 17:03:17.962 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:03:17 compute-0 nova_compute[259504]: 2025-10-01 17:03:17.970 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 17:03:17 compute-0 nova_compute[259504]: 2025-10-01 17:03:17.987 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 17:03:17 compute-0 nova_compute[259504]: 2025-10-01 17:03:17.990 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 17:03:17 compute-0 nova_compute[259504]: 2025-10-01 17:03:17.991 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:03:18 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 47 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 57 KiB/s wr, 7 op/s
Oct 01 17:03:18 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Oct 01 17:03:18 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Oct 01 17:03:18 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Oct 01 17:03:18 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Oct 01 17:03:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:18 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:03:18 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:03:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:03:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4040502504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:03:18 compute-0 ceph-mon[74273]: pgmap v1006: 305 pgs: 305 active+clean; 47 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 57 KiB/s wr, 7 op/s
Oct 01 17:03:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Oct 01 17:03:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Oct 01 17:03:19 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:19 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:03:19.971 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:03:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:03:19.971 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:03:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:03:19.971 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:03:19 compute-0 nova_compute[259504]: 2025-10-01 17:03:19.992 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:03:19 compute-0 nova_compute[259504]: 2025-10-01 17:03:19.993 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:03:19 compute-0 nova_compute[259504]: 2025-10-01 17:03:19.993 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 17:03:19 compute-0 nova_compute[259504]: 2025-10-01 17:03:19.993 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 17:03:20 compute-0 nova_compute[259504]: 2025-10-01 17:03:20.012 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 17:03:20 compute-0 nova_compute[259504]: 2025-10-01 17:03:20.013 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:03:20 compute-0 nova_compute[259504]: 2025-10-01 17:03:20.014 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:03:20 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 KiB/s wr, 10 op/s
Oct 01 17:03:20 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b596d505-7972-465b-8b8f-82b926477693", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume deauthorize, sub_name:b596d505-7972-465b-8b8f-82b926477693, vol_name:cephfs) < ""
Oct 01 17:03:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"} v 0) v1
Oct 01 17:03:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"} v 0) v1
Oct 01 17:03:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]: dispatch
Oct 01 17:03:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]': finished
Oct 01 17:03:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume deauthorize, sub_name:b596d505-7972-465b-8b8f-82b926477693, vol_name:cephfs) < ""
Oct 01 17:03:20 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b596d505-7972-465b-8b8f-82b926477693", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume evict, sub_name:b596d505-7972-465b-8b8f-82b926477693, vol_name:cephfs) < ""
Oct 01 17:03:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-26224483, client_metadata.root=/volumes/_nogroup/b596d505-7972-465b-8b8f-82b926477693/dec8c348-a88e-4722-a1a2-25b1f0501b74
Oct 01 17:03:20 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=tempest-cephx-id-26224483,client_metadata.root=/volumes/_nogroup/b596d505-7972-465b-8b8f-82b926477693/dec8c348-a88e-4722-a1a2-25b1f0501b74],prefix=session evict} (starting...)
Oct 01 17:03:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:03:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume evict, sub_name:b596d505-7972-465b-8b8f-82b926477693, vol_name:cephfs) < ""
Oct 01 17:03:20 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b596d505-7972-465b-8b8f-82b926477693", "format": "json"}]: dispatch
Oct 01 17:03:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b596d505-7972-465b-8b8f-82b926477693, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:03:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b596d505-7972-465b-8b8f-82b926477693, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:03:20 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:03:20.370+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b596d505-7972-465b-8b8f-82b926477693' of type subvolume
Oct 01 17:03:20 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b596d505-7972-465b-8b8f-82b926477693' of type subvolume
Oct 01 17:03:20 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b596d505-7972-465b-8b8f-82b926477693", "force": true, "format": "json"}]: dispatch
Oct 01 17:03:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b596d505-7972-465b-8b8f-82b926477693, vol_name:cephfs) < ""
Oct 01 17:03:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b596d505-7972-465b-8b8f-82b926477693'' moved to trashcan
Oct 01 17:03:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:03:20 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b596d505-7972-465b-8b8f-82b926477693, vol_name:cephfs) < ""
Oct 01 17:03:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:03:20 compute-0 nova_compute[259504]: 2025-10-01 17:03:20.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:03:20 compute-0 nova_compute[259504]: 2025-10-01 17:03:20.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:03:20 compute-0 nova_compute[259504]: 2025-10-01 17:03:20.750 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 17:03:20 compute-0 ceph-mon[74273]: pgmap v1007: 305 pgs: 305 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 KiB/s wr, 10 op/s
Oct 01 17:03:20 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b596d505-7972-465b-8b8f-82b926477693", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]: dispatch
Oct 01 17:03:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]': finished
Oct 01 17:03:20 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b596d505-7972-465b-8b8f-82b926477693", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.767532721234616e-05 of space, bias 4.0, pg target 0.11721039265481539 quantized to 16 (current 16)
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:03:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:03:21 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b596d505-7972-465b-8b8f-82b926477693", "format": "json"}]: dispatch
Oct 01 17:03:21 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b596d505-7972-465b-8b8f-82b926477693", "force": true, "format": "json"}]: dispatch
Oct 01 17:03:22 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 54 KiB/s wr, 6 op/s
Oct 01 17:03:22 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:03:22 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:03:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Oct 01 17:03:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:03:22 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice bob with tenant 1841221f332340a299707d253063659f
Oct 01 17:03:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:03:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:22 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:03:22 compute-0 nova_compute[259504]: 2025-10-01 17:03:22.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:03:22 compute-0 ceph-mon[74273]: pgmap v1008: 305 pgs: 305 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 54 KiB/s wr, 6 op/s
Oct 01 17:03:22 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:03:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:03:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:23 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0919941a-1ddb-4e3c-91a1-6d4450c6b2dd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:03:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0919941a-1ddb-4e3c-91a1-6d4450c6b2dd, vol_name:cephfs) < ""
Oct 01 17:03:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0919941a-1ddb-4e3c-91a1-6d4450c6b2dd/.meta.tmp'
Oct 01 17:03:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0919941a-1ddb-4e3c-91a1-6d4450c6b2dd/.meta.tmp' to config b'/volumes/_nogroup/0919941a-1ddb-4e3c-91a1-6d4450c6b2dd/.meta'
Oct 01 17:03:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0919941a-1ddb-4e3c-91a1-6d4450c6b2dd, vol_name:cephfs) < ""
Oct 01 17:03:23 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0919941a-1ddb-4e3c-91a1-6d4450c6b2dd", "format": "json"}]: dispatch
Oct 01 17:03:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0919941a-1ddb-4e3c-91a1-6d4450c6b2dd, vol_name:cephfs) < ""
Oct 01 17:03:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0919941a-1ddb-4e3c-91a1-6d4450c6b2dd, vol_name:cephfs) < ""
Oct 01 17:03:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:03:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:03:23 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:03:23 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 4785 writes, 21K keys, 4785 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4785 writes, 4785 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1480 writes, 6817 keys, 1480 commit groups, 1.0 writes per commit group, ingest: 9.65 MB, 0.02 MB/s
                                           Interval WAL: 1480 writes, 1480 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     36.4      0.67              0.09        12    0.056       0      0       0.0       0.0
                                             L6      1/0    7.36 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    127.8    104.6      0.74              0.29        11    0.067     48K   5788       0.0       0.0
                                            Sum      1/0    7.36 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     66.9     72.1      1.41              0.38        23    0.062     48K   5788       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.1     48.6     49.2      0.92              0.14        10    0.092     23K   2590       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    127.8    104.6      0.74              0.29        11    0.067     48K   5788       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     41.2      0.59              0.09        11    0.054       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.08              0.00         1    0.079       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.024, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 1.4 seconds
                                           Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5647d11d91f0#2 capacity: 304.00 MB usage: 8.70 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 9.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(552,8.30 MB,2.72903%) FilterBlock(24,142.11 KB,0.0456509%) IndexBlock(24,266.98 KB,0.0857654%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 01 17:03:23 compute-0 podman[269415]: 2025-10-01 17:03:23.774338066 +0000 UTC m=+0.082651573 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 01 17:03:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:03:24 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 94 KiB/s wr, 12 op/s
Oct 01 17:03:25 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0919941a-1ddb-4e3c-91a1-6d4450c6b2dd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:03:25 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0919941a-1ddb-4e3c-91a1-6d4450c6b2dd", "format": "json"}]: dispatch
Oct 01 17:03:25 compute-0 ceph-mon[74273]: pgmap v1009: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 94 KiB/s wr, 12 op/s
Oct 01 17:03:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:03:25 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:03:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Oct 01 17:03:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:03:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Oct 01 17:03:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Oct 01 17:03:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 01 17:03:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:25 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:03:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:03:25 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:03:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:03:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:03:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Oct 01 17:03:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 01 17:03:26 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 67 KiB/s wr, 9 op/s
Oct 01 17:03:26 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:03:26.452 162304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:71:db', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:60:3f:78:bd:29'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 17:03:26 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:03:26.453 162304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 17:03:26 compute-0 podman[269436]: 2025-10-01 17:03:26.791918992 +0000 UTC m=+0.094594624 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 01 17:03:27 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:03:27 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:03:27 compute-0 ceph-mon[74273]: pgmap v1010: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 67 KiB/s wr, 9 op/s
Oct 01 17:03:27 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0919941a-1ddb-4e3c-91a1-6d4450c6b2dd", "auth_id": "tempest-cephx-id-26224483", "tenant_id": "a0ac8ec815504b8dae62c40a55008f52", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:03:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume authorize, sub_name:0919941a-1ddb-4e3c-91a1-6d4450c6b2dd, tenant_id:a0ac8ec815504b8dae62c40a55008f52, vol_name:cephfs) < ""
Oct 01 17:03:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"} v 0) v1
Oct 01 17:03:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:27 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID tempest-cephx-id-26224483 with tenant a0ac8ec815504b8dae62c40a55008f52
Oct 01 17:03:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/0919941a-1ddb-4e3c-91a1-6d4450c6b2dd/b02fbf2a-0398-4e24-a337-d8cbc1e3781f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0919941a-1ddb-4e3c-91a1-6d4450c6b2dd", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:03:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/0919941a-1ddb-4e3c-91a1-6d4450c6b2dd/b02fbf2a-0398-4e24-a337-d8cbc1e3781f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0919941a-1ddb-4e3c-91a1-6d4450c6b2dd", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/0919941a-1ddb-4e3c-91a1-6d4450c6b2dd/b02fbf2a-0398-4e24-a337-d8cbc1e3781f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0919941a-1ddb-4e3c-91a1-6d4450c6b2dd", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume authorize, sub_name:0919941a-1ddb-4e3c-91a1-6d4450c6b2dd, tenant_id:a0ac8ec815504b8dae62c40a55008f52, vol_name:cephfs) < ""
Oct 01 17:03:28 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 67 KiB/s wr, 9 op/s
Oct 01 17:03:28 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0919941a-1ddb-4e3c-91a1-6d4450c6b2dd", "auth_id": "tempest-cephx-id-26224483", "tenant_id": "a0ac8ec815504b8dae62c40a55008f52", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:03:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/0919941a-1ddb-4e3c-91a1-6d4450c6b2dd/b02fbf2a-0398-4e24-a337-d8cbc1e3781f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0919941a-1ddb-4e3c-91a1-6d4450c6b2dd", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/0919941a-1ddb-4e3c-91a1-6d4450c6b2dd/b02fbf2a-0398-4e24-a337-d8cbc1e3781f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0919941a-1ddb-4e3c-91a1-6d4450c6b2dd", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:29 compute-0 ceph-mon[74273]: pgmap v1011: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 67 KiB/s wr, 9 op/s
Oct 01 17:03:29 compute-0 sudo[269456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:03:29 compute-0 sudo[269456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:03:29 compute-0 sudo[269456]: pam_unix(sudo:session): session closed for user root
Oct 01 17:03:29 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:03:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:03:29 compute-0 sudo[269481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:03:29 compute-0 sudo[269481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:03:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Oct 01 17:03:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:03:29 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice bob with tenant 1841221f332340a299707d253063659f
Oct 01 17:03:29 compute-0 sudo[269481]: pam_unix(sudo:session): session closed for user root
Oct 01 17:03:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:03:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:29 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:29 compute-0 sudo[269506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:03:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:03:29 compute-0 sudo[269506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:03:29 compute-0 sudo[269506]: pam_unix(sudo:session): session closed for user root
Oct 01 17:03:29 compute-0 sudo[269531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 17:03:29 compute-0 sudo[269531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:03:30 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 94 KiB/s wr, 12 op/s
Oct 01 17:03:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:03:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:30 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:30 compute-0 sudo[269531]: pam_unix(sudo:session): session closed for user root
Oct 01 17:03:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:03:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:03:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:03:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 17:03:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:03:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 17:03:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:03:30 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev c31a5822-1d3a-4ec5-9a51-20550c573dc7 does not exist
Oct 01 17:03:30 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 44fd2c56-0bc4-4a62-8a0d-c7b77b248570 does not exist
Oct 01 17:03:30 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 8b4b3e1c-bceb-44f9-acb5-2d6f3e3d1ee1 does not exist
Oct 01 17:03:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 17:03:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:03:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 17:03:30 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:03:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:03:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:03:30 compute-0 sudo[269587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:03:30 compute-0 sudo[269587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:03:30 compute-0 sudo[269587]: pam_unix(sudo:session): session closed for user root
Oct 01 17:03:30 compute-0 sudo[269612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:03:30 compute-0 sudo[269612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:03:30 compute-0 sudo[269612]: pam_unix(sudo:session): session closed for user root
Oct 01 17:03:30 compute-0 sudo[269637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:03:30 compute-0 sudo[269637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:03:30 compute-0 sudo[269637]: pam_unix(sudo:session): session closed for user root
Oct 01 17:03:30 compute-0 sudo[269662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 17:03:30 compute-0 sudo[269662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:03:31 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:03:31 compute-0 ceph-mon[74273]: pgmap v1012: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 94 KiB/s wr, 12 op/s
Oct 01 17:03:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:03:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:03:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:03:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:03:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:03:31 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:03:31 compute-0 podman[269727]: 2025-10-01 17:03:31.272163405 +0000 UTC m=+0.059965556 container create 66e3f9b91541a192b4aee640e290d47f2bdb7e5b6cab3b1f53a3d47c8f8d9308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_burnell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:03:31 compute-0 systemd[1]: Started libpod-conmon-66e3f9b91541a192b4aee640e290d47f2bdb7e5b6cab3b1f53a3d47c8f8d9308.scope.
Oct 01 17:03:31 compute-0 podman[269727]: 2025-10-01 17:03:31.2514157 +0000 UTC m=+0.039217891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:03:31 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:03:31 compute-0 podman[269727]: 2025-10-01 17:03:31.403496138 +0000 UTC m=+0.191298319 container init 66e3f9b91541a192b4aee640e290d47f2bdb7e5b6cab3b1f53a3d47c8f8d9308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:03:31 compute-0 podman[269727]: 2025-10-01 17:03:31.41520355 +0000 UTC m=+0.203005701 container start 66e3f9b91541a192b4aee640e290d47f2bdb7e5b6cab3b1f53a3d47c8f8d9308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_burnell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:03:31 compute-0 podman[269727]: 2025-10-01 17:03:31.41886322 +0000 UTC m=+0.206665381 container attach 66e3f9b91541a192b4aee640e290d47f2bdb7e5b6cab3b1f53a3d47c8f8d9308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_burnell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 17:03:31 compute-0 vigilant_burnell[269744]: 167 167
Oct 01 17:03:31 compute-0 podman[269727]: 2025-10-01 17:03:31.423198831 +0000 UTC m=+0.211000982 container died 66e3f9b91541a192b4aee640e290d47f2bdb7e5b6cab3b1f53a3d47c8f8d9308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:03:31 compute-0 systemd[1]: libpod-66e3f9b91541a192b4aee640e290d47f2bdb7e5b6cab3b1f53a3d47c8f8d9308.scope: Deactivated successfully.
Oct 01 17:03:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-792bf5ccf94b832a7611a80b083e30c29083af549b05115ef4dd13a6b5eccb2f-merged.mount: Deactivated successfully.
Oct 01 17:03:31 compute-0 podman[269727]: 2025-10-01 17:03:31.476541176 +0000 UTC m=+0.264343327 container remove 66e3f9b91541a192b4aee640e290d47f2bdb7e5b6cab3b1f53a3d47c8f8d9308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 01 17:03:31 compute-0 systemd[1]: libpod-conmon-66e3f9b91541a192b4aee640e290d47f2bdb7e5b6cab3b1f53a3d47c8f8d9308.scope: Deactivated successfully.
Oct 01 17:03:31 compute-0 podman[269768]: 2025-10-01 17:03:31.738533142 +0000 UTC m=+0.075151563 container create 5b0ceecbce7b6fef20a0a19ed3ab1722a8c6993bb4bf9f0f6455cb1205754733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:03:31 compute-0 podman[269768]: 2025-10-01 17:03:31.707735045 +0000 UTC m=+0.044353506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:03:31 compute-0 systemd[1]: Started libpod-conmon-5b0ceecbce7b6fef20a0a19ed3ab1722a8c6993bb4bf9f0f6455cb1205754733.scope.
Oct 01 17:03:31 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1d43f4dea704b476a32dc29c164a4da642240c44d5c9b58f4ff7b148b52804/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1d43f4dea704b476a32dc29c164a4da642240c44d5c9b58f4ff7b148b52804/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1d43f4dea704b476a32dc29c164a4da642240c44d5c9b58f4ff7b148b52804/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1d43f4dea704b476a32dc29c164a4da642240c44d5c9b58f4ff7b148b52804/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1d43f4dea704b476a32dc29c164a4da642240c44d5c9b58f4ff7b148b52804/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 17:03:31 compute-0 podman[269768]: 2025-10-01 17:03:31.868191365 +0000 UTC m=+0.204809816 container init 5b0ceecbce7b6fef20a0a19ed3ab1722a8c6993bb4bf9f0f6455cb1205754733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 17:03:31 compute-0 podman[269768]: 2025-10-01 17:03:31.880022141 +0000 UTC m=+0.216640522 container start 5b0ceecbce7b6fef20a0a19ed3ab1722a8c6993bb4bf9f0f6455cb1205754733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_payne, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 17:03:31 compute-0 podman[269768]: 2025-10-01 17:03:31.885493286 +0000 UTC m=+0.222111667 container attach 5b0ceecbce7b6fef20a0a19ed3ab1722a8c6993bb4bf9f0f6455cb1205754733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_payne, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:03:32 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 66 KiB/s wr, 9 op/s
Oct 01 17:03:32 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0919941a-1ddb-4e3c-91a1-6d4450c6b2dd", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume deauthorize, sub_name:0919941a-1ddb-4e3c-91a1-6d4450c6b2dd, vol_name:cephfs) < ""
Oct 01 17:03:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"} v 0) v1
Oct 01 17:03:32 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"} v 0) v1
Oct 01 17:03:32 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]: dispatch
Oct 01 17:03:32 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]': finished
Oct 01 17:03:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume deauthorize, sub_name:0919941a-1ddb-4e3c-91a1-6d4450c6b2dd, vol_name:cephfs) < ""
Oct 01 17:03:32 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0919941a-1ddb-4e3c-91a1-6d4450c6b2dd", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume evict, sub_name:0919941a-1ddb-4e3c-91a1-6d4450c6b2dd, vol_name:cephfs) < ""
Oct 01 17:03:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-26224483, client_metadata.root=/volumes/_nogroup/0919941a-1ddb-4e3c-91a1-6d4450c6b2dd/b02fbf2a-0398-4e24-a337-d8cbc1e3781f
Oct 01 17:03:32 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=tempest-cephx-id-26224483,client_metadata.root=/volumes/_nogroup/0919941a-1ddb-4e3c-91a1-6d4450c6b2dd/b02fbf2a-0398-4e24-a337-d8cbc1e3781f],prefix=session evict} (starting...)
Oct 01 17:03:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:03:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume evict, sub_name:0919941a-1ddb-4e3c-91a1-6d4450c6b2dd, vol_name:cephfs) < ""
Oct 01 17:03:32 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0919941a-1ddb-4e3c-91a1-6d4450c6b2dd", "format": "json"}]: dispatch
Oct 01 17:03:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0919941a-1ddb-4e3c-91a1-6d4450c6b2dd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:03:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0919941a-1ddb-4e3c-91a1-6d4450c6b2dd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:03:32 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:03:32.627+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0919941a-1ddb-4e3c-91a1-6d4450c6b2dd' of type subvolume
Oct 01 17:03:32 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0919941a-1ddb-4e3c-91a1-6d4450c6b2dd' of type subvolume
Oct 01 17:03:32 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0919941a-1ddb-4e3c-91a1-6d4450c6b2dd", "force": true, "format": "json"}]: dispatch
Oct 01 17:03:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0919941a-1ddb-4e3c-91a1-6d4450c6b2dd, vol_name:cephfs) < ""
Oct 01 17:03:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0919941a-1ddb-4e3c-91a1-6d4450c6b2dd'' moved to trashcan
Oct 01 17:03:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:03:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0919941a-1ddb-4e3c-91a1-6d4450c6b2dd, vol_name:cephfs) < ""
Oct 01 17:03:33 compute-0 interesting_payne[269784]: --> passed data devices: 0 physical, 3 LVM
Oct 01 17:03:33 compute-0 interesting_payne[269784]: --> relative data size: 1.0
Oct 01 17:03:33 compute-0 interesting_payne[269784]: --> All data devices are unavailable
Oct 01 17:03:33 compute-0 systemd[1]: libpod-5b0ceecbce7b6fef20a0a19ed3ab1722a8c6993bb4bf9f0f6455cb1205754733.scope: Deactivated successfully.
Oct 01 17:03:33 compute-0 systemd[1]: libpod-5b0ceecbce7b6fef20a0a19ed3ab1722a8c6993bb4bf9f0f6455cb1205754733.scope: Consumed 1.096s CPU time.
Oct 01 17:03:33 compute-0 podman[269768]: 2025-10-01 17:03:33.062699569 +0000 UTC m=+1.399317950 container died 5b0ceecbce7b6fef20a0a19ed3ab1722a8c6993bb4bf9f0f6455cb1205754733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 17:03:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d1d43f4dea704b476a32dc29c164a4da642240c44d5c9b58f4ff7b148b52804-merged.mount: Deactivated successfully.
Oct 01 17:03:33 compute-0 podman[269768]: 2025-10-01 17:03:33.118247391 +0000 UTC m=+1.454865772 container remove 5b0ceecbce7b6fef20a0a19ed3ab1722a8c6993bb4bf9f0f6455cb1205754733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_payne, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 17:03:33 compute-0 systemd[1]: libpod-conmon-5b0ceecbce7b6fef20a0a19ed3ab1722a8c6993bb4bf9f0f6455cb1205754733.scope: Deactivated successfully.
Oct 01 17:03:33 compute-0 sudo[269662]: pam_unix(sudo:session): session closed for user root
Oct 01 17:03:33 compute-0 sudo[269827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:03:33 compute-0 sudo[269827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:03:33 compute-0 sudo[269827]: pam_unix(sudo:session): session closed for user root
Oct 01 17:03:33 compute-0 ceph-mon[74273]: pgmap v1013: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 66 KiB/s wr, 9 op/s
Oct 01 17:03:33 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:33 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]: dispatch
Oct 01 17:03:33 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]': finished
Oct 01 17:03:33 compute-0 sudo[269852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:03:33 compute-0 sudo[269852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:03:33 compute-0 sudo[269852]: pam_unix(sudo:session): session closed for user root
Oct 01 17:03:33 compute-0 sudo[269877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:03:33 compute-0 sudo[269877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:03:33 compute-0 sudo[269877]: pam_unix(sudo:session): session closed for user root
Oct 01 17:03:33 compute-0 sudo[269902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 17:03:33 compute-0 sudo[269902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:03:33 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:03:33 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Oct 01 17:03:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:03:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Oct 01 17:03:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Oct 01 17:03:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 01 17:03:33 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:33 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:03:33 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:33 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:03:33 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:03:33 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:03:33 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:33 compute-0 podman[269969]: 2025-10-01 17:03:33.874985317 +0000 UTC m=+0.065925724 container create cbce440ffd6e582b3e2d2c35a82057fa69a1f7736c5de1f54f81f239bbe7a309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_solomon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:03:33 compute-0 systemd[1]: Started libpod-conmon-cbce440ffd6e582b3e2d2c35a82057fa69a1f7736c5de1f54f81f239bbe7a309.scope.
Oct 01 17:03:33 compute-0 podman[269969]: 2025-10-01 17:03:33.848461719 +0000 UTC m=+0.039402166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:03:33 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:03:33 compute-0 podman[269969]: 2025-10-01 17:03:33.98170321 +0000 UTC m=+0.172643617 container init cbce440ffd6e582b3e2d2c35a82057fa69a1f7736c5de1f54f81f239bbe7a309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:03:33 compute-0 podman[269969]: 2025-10-01 17:03:33.992321339 +0000 UTC m=+0.183261746 container start cbce440ffd6e582b3e2d2c35a82057fa69a1f7736c5de1f54f81f239bbe7a309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_solomon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:03:33 compute-0 podman[269969]: 2025-10-01 17:03:33.997778133 +0000 UTC m=+0.188718540 container attach cbce440ffd6e582b3e2d2c35a82057fa69a1f7736c5de1f54f81f239bbe7a309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_solomon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 17:03:33 compute-0 epic_solomon[269985]: 167 167
Oct 01 17:03:33 compute-0 systemd[1]: libpod-cbce440ffd6e582b3e2d2c35a82057fa69a1f7736c5de1f54f81f239bbe7a309.scope: Deactivated successfully.
Oct 01 17:03:34 compute-0 podman[269969]: 2025-10-01 17:03:34.000051222 +0000 UTC m=+0.190991649 container died cbce440ffd6e582b3e2d2c35a82057fa69a1f7736c5de1f54f81f239bbe7a309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_solomon, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:03:34 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 48 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 106 KiB/s wr, 13 op/s
Oct 01 17:03:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dcb538cc5ee2879b2622d517f0aa3d6c211f6068a36a488c0a65e5be37c0c88-merged.mount: Deactivated successfully.
Oct 01 17:03:34 compute-0 podman[269969]: 2025-10-01 17:03:34.052885632 +0000 UTC m=+0.243826039 container remove cbce440ffd6e582b3e2d2c35a82057fa69a1f7736c5de1f54f81f239bbe7a309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:03:34 compute-0 systemd[1]: libpod-conmon-cbce440ffd6e582b3e2d2c35a82057fa69a1f7736c5de1f54f81f239bbe7a309.scope: Deactivated successfully.
Oct 01 17:03:34 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0919941a-1ddb-4e3c-91a1-6d4450c6b2dd", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:34 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0919941a-1ddb-4e3c-91a1-6d4450c6b2dd", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:34 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0919941a-1ddb-4e3c-91a1-6d4450c6b2dd", "format": "json"}]: dispatch
Oct 01 17:03:34 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0919941a-1ddb-4e3c-91a1-6d4450c6b2dd", "force": true, "format": "json"}]: dispatch
Oct 01 17:03:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:03:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Oct 01 17:03:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 01 17:03:34 compute-0 podman[270009]: 2025-10-01 17:03:34.298153655 +0000 UTC m=+0.072442312 container create 495b783d93d93cea73694d290ae48f88ac93a557e7570db0297290968eb27d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 17:03:34 compute-0 systemd[1]: Started libpod-conmon-495b783d93d93cea73694d290ae48f88ac93a557e7570db0297290968eb27d41.scope.
Oct 01 17:03:34 compute-0 podman[270009]: 2025-10-01 17:03:34.270235764 +0000 UTC m=+0.044524421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:03:34 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eb2712a29e570c48936f4b911452b27743c3208673238e83a8c9119a2143897/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eb2712a29e570c48936f4b911452b27743c3208673238e83a8c9119a2143897/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eb2712a29e570c48936f4b911452b27743c3208673238e83a8c9119a2143897/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eb2712a29e570c48936f4b911452b27743c3208673238e83a8c9119a2143897/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:03:34 compute-0 podman[270009]: 2025-10-01 17:03:34.430074545 +0000 UTC m=+0.204363212 container init 495b783d93d93cea73694d290ae48f88ac93a557e7570db0297290968eb27d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 17:03:34 compute-0 podman[270009]: 2025-10-01 17:03:34.441098727 +0000 UTC m=+0.215387384 container start 495b783d93d93cea73694d290ae48f88ac93a557e7570db0297290968eb27d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 17:03:34 compute-0 podman[270009]: 2025-10-01 17:03:34.445008775 +0000 UTC m=+0.219297422 container attach 495b783d93d93cea73694d290ae48f88ac93a557e7570db0297290968eb27d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 17:03:35 compute-0 gracious_nash[270027]: {
Oct 01 17:03:35 compute-0 gracious_nash[270027]:     "0": [
Oct 01 17:03:35 compute-0 gracious_nash[270027]:         {
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "devices": [
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "/dev/loop3"
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             ],
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "lv_name": "ceph_lv0",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "lv_size": "21470642176",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "name": "ceph_lv0",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "tags": {
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.cluster_name": "ceph",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.crush_device_class": "",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.encrypted": "0",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.osd_id": "0",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.type": "block",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.vdo": "0"
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             },
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "type": "block",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "vg_name": "ceph_vg0"
Oct 01 17:03:35 compute-0 gracious_nash[270027]:         }
Oct 01 17:03:35 compute-0 gracious_nash[270027]:     ],
Oct 01 17:03:35 compute-0 gracious_nash[270027]:     "1": [
Oct 01 17:03:35 compute-0 gracious_nash[270027]:         {
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "devices": [
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "/dev/loop4"
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             ],
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "lv_name": "ceph_lv1",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "lv_size": "21470642176",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "name": "ceph_lv1",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "tags": {
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.cluster_name": "ceph",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.crush_device_class": "",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.encrypted": "0",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.osd_id": "1",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.type": "block",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.vdo": "0"
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             },
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "type": "block",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "vg_name": "ceph_vg1"
Oct 01 17:03:35 compute-0 gracious_nash[270027]:         }
Oct 01 17:03:35 compute-0 gracious_nash[270027]:     ],
Oct 01 17:03:35 compute-0 gracious_nash[270027]:     "2": [
Oct 01 17:03:35 compute-0 gracious_nash[270027]:         {
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "devices": [
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "/dev/loop5"
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             ],
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "lv_name": "ceph_lv2",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "lv_size": "21470642176",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "name": "ceph_lv2",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "tags": {
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.cluster_name": "ceph",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.crush_device_class": "",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.encrypted": "0",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.osd_id": "2",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.type": "block",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:                 "ceph.vdo": "0"
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             },
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "type": "block",
Oct 01 17:03:35 compute-0 gracious_nash[270027]:             "vg_name": "ceph_vg2"
Oct 01 17:03:35 compute-0 gracious_nash[270027]:         }
Oct 01 17:03:35 compute-0 gracious_nash[270027]:     ]
Oct 01 17:03:35 compute-0 gracious_nash[270027]: }
Oct 01 17:03:35 compute-0 systemd[1]: libpod-495b783d93d93cea73694d290ae48f88ac93a557e7570db0297290968eb27d41.scope: Deactivated successfully.
Oct 01 17:03:35 compute-0 podman[270009]: 2025-10-01 17:03:35.2023579 +0000 UTC m=+0.976646567 container died 495b783d93d93cea73694d290ae48f88ac93a557e7570db0297290968eb27d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 17:03:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-8eb2712a29e570c48936f4b911452b27743c3208673238e83a8c9119a2143897-merged.mount: Deactivated successfully.
Oct 01 17:03:35 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:03:35 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:03:35 compute-0 ceph-mon[74273]: pgmap v1014: 305 pgs: 305 active+clean; 48 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 106 KiB/s wr, 13 op/s
Oct 01 17:03:35 compute-0 podman[270009]: 2025-10-01 17:03:35.275850162 +0000 UTC m=+1.050138790 container remove 495b783d93d93cea73694d290ae48f88ac93a557e7570db0297290968eb27d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:03:35 compute-0 systemd[1]: libpod-conmon-495b783d93d93cea73694d290ae48f88ac93a557e7570db0297290968eb27d41.scope: Deactivated successfully.
Oct 01 17:03:35 compute-0 sudo[269902]: pam_unix(sudo:session): session closed for user root
Oct 01 17:03:35 compute-0 sudo[270048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:03:35 compute-0 sudo[270048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:03:35 compute-0 sudo[270048]: pam_unix(sudo:session): session closed for user root
Oct 01 17:03:35 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:03:35.456 162304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2971fc2-5b75-459a-98a0-6e626d0d4d99, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 17:03:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:03:35 compute-0 sudo[270073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:03:35 compute-0 sudo[270073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:03:35 compute-0 sudo[270073]: pam_unix(sudo:session): session closed for user root
Oct 01 17:03:35 compute-0 sudo[270098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:03:35 compute-0 sudo[270098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:03:35 compute-0 sudo[270098]: pam_unix(sudo:session): session closed for user root
Oct 01 17:03:35 compute-0 sudo[270123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 17:03:35 compute-0 sudo[270123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:03:36 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 48 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 66 KiB/s wr, 7 op/s
Oct 01 17:03:36 compute-0 podman[270189]: 2025-10-01 17:03:36.135459176 +0000 UTC m=+0.052812260 container create 8044cf6d66d9834fcc6075c89d6edf480b452ed066f0cbf0bcb240ba0fa09083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:03:36 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e14aa593-425a-47dd-b300-8569859e6275", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:03:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e14aa593-425a-47dd-b300-8569859e6275, vol_name:cephfs) < ""
Oct 01 17:03:36 compute-0 systemd[1]: Started libpod-conmon-8044cf6d66d9834fcc6075c89d6edf480b452ed066f0cbf0bcb240ba0fa09083.scope.
Oct 01 17:03:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e14aa593-425a-47dd-b300-8569859e6275/.meta.tmp'
Oct 01 17:03:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e14aa593-425a-47dd-b300-8569859e6275/.meta.tmp' to config b'/volumes/_nogroup/e14aa593-425a-47dd-b300-8569859e6275/.meta'
Oct 01 17:03:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e14aa593-425a-47dd-b300-8569859e6275, vol_name:cephfs) < ""
Oct 01 17:03:36 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e14aa593-425a-47dd-b300-8569859e6275", "format": "json"}]: dispatch
Oct 01 17:03:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e14aa593-425a-47dd-b300-8569859e6275, vol_name:cephfs) < ""
Oct 01 17:03:36 compute-0 podman[270189]: 2025-10-01 17:03:36.109308149 +0000 UTC m=+0.026661283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:03:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e14aa593-425a-47dd-b300-8569859e6275, vol_name:cephfs) < ""
Oct 01 17:03:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:03:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:03:36 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:03:36 compute-0 podman[270189]: 2025-10-01 17:03:36.251844089 +0000 UTC m=+0.169197223 container init 8044cf6d66d9834fcc6075c89d6edf480b452ed066f0cbf0bcb240ba0fa09083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:03:36 compute-0 podman[270189]: 2025-10-01 17:03:36.266842921 +0000 UTC m=+0.184195995 container start 8044cf6d66d9834fcc6075c89d6edf480b452ed066f0cbf0bcb240ba0fa09083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:03:36 compute-0 podman[270189]: 2025-10-01 17:03:36.272499471 +0000 UTC m=+0.189852555 container attach 8044cf6d66d9834fcc6075c89d6edf480b452ed066f0cbf0bcb240ba0fa09083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 17:03:36 compute-0 clever_turing[270206]: 167 167
Oct 01 17:03:36 compute-0 systemd[1]: libpod-8044cf6d66d9834fcc6075c89d6edf480b452ed066f0cbf0bcb240ba0fa09083.scope: Deactivated successfully.
Oct 01 17:03:36 compute-0 podman[270189]: 2025-10-01 17:03:36.275524342 +0000 UTC m=+0.192877416 container died 8044cf6d66d9834fcc6075c89d6edf480b452ed066f0cbf0bcb240ba0fa09083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 01 17:03:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:03:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-690fdac08ce119831c18535f4ca6c0ac798c6fa350aeeb07728fcf2a72ad971f-merged.mount: Deactivated successfully.
Oct 01 17:03:36 compute-0 podman[270189]: 2025-10-01 17:03:36.333410934 +0000 UTC m=+0.250764018 container remove 8044cf6d66d9834fcc6075c89d6edf480b452ed066f0cbf0bcb240ba0fa09083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:03:36 compute-0 systemd[1]: libpod-conmon-8044cf6d66d9834fcc6075c89d6edf480b452ed066f0cbf0bcb240ba0fa09083.scope: Deactivated successfully.
Oct 01 17:03:36 compute-0 podman[270230]: 2025-10-01 17:03:36.606210516 +0000 UTC m=+0.091462844 container create f80f675f1a91e15062fbd6907d4af6b896f9df241bb728670275ce832ab7613b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 17:03:36 compute-0 podman[270230]: 2025-10-01 17:03:36.568660955 +0000 UTC m=+0.053913333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:03:36 compute-0 systemd[1]: Started libpod-conmon-f80f675f1a91e15062fbd6907d4af6b896f9df241bb728670275ce832ab7613b.scope.
Oct 01 17:03:36 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05ee69f44da9e66336aa9e1469f4bd82ca168cb1cc0d66d14f90629b65a4d47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05ee69f44da9e66336aa9e1469f4bd82ca168cb1cc0d66d14f90629b65a4d47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05ee69f44da9e66336aa9e1469f4bd82ca168cb1cc0d66d14f90629b65a4d47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05ee69f44da9e66336aa9e1469f4bd82ca168cb1cc0d66d14f90629b65a4d47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:03:36 compute-0 podman[270230]: 2025-10-01 17:03:36.723522237 +0000 UTC m=+0.208774625 container init f80f675f1a91e15062fbd6907d4af6b896f9df241bb728670275ce832ab7613b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_greider, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 17:03:36 compute-0 podman[270230]: 2025-10-01 17:03:36.743654132 +0000 UTC m=+0.228906470 container start f80f675f1a91e15062fbd6907d4af6b896f9df241bb728670275ce832ab7613b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_greider, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 17:03:36 compute-0 podman[270230]: 2025-10-01 17:03:36.748957702 +0000 UTC m=+0.234210030 container attach f80f675f1a91e15062fbd6907d4af6b896f9df241bb728670275ce832ab7613b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_greider, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 17:03:37 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:03:37 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:03:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Oct 01 17:03:37 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:03:37 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice with tenant 1841221f332340a299707d253063659f
Oct 01 17:03:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:03:37 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:37 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:37 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:03:37 compute-0 ceph-mon[74273]: pgmap v1015: 305 pgs: 305 active+clean; 48 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 66 KiB/s wr, 7 op/s
Oct 01 17:03:37 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e14aa593-425a-47dd-b300-8569859e6275", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:03:37 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e14aa593-425a-47dd-b300-8569859e6275", "format": "json"}]: dispatch
Oct 01 17:03:37 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:03:37 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:37 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:37 compute-0 optimistic_greider[270246]: {
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:         "osd_id": 2,
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:         "type": "bluestore"
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:     },
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:         "osd_id": 0,
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:         "type": "bluestore"
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:     },
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:         "osd_id": 1,
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:         "type": "bluestore"
Oct 01 17:03:37 compute-0 optimistic_greider[270246]:     }
Oct 01 17:03:37 compute-0 optimistic_greider[270246]: }
Oct 01 17:03:37 compute-0 systemd[1]: libpod-f80f675f1a91e15062fbd6907d4af6b896f9df241bb728670275ce832ab7613b.scope: Deactivated successfully.
Oct 01 17:03:37 compute-0 systemd[1]: libpod-f80f675f1a91e15062fbd6907d4af6b896f9df241bb728670275ce832ab7613b.scope: Consumed 1.113s CPU time.
Oct 01 17:03:37 compute-0 podman[270230]: 2025-10-01 17:03:37.900698368 +0000 UTC m=+1.385950726 container died f80f675f1a91e15062fbd6907d4af6b896f9df241bb728670275ce832ab7613b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 17:03:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-d05ee69f44da9e66336aa9e1469f4bd82ca168cb1cc0d66d14f90629b65a4d47-merged.mount: Deactivated successfully.
Oct 01 17:03:38 compute-0 podman[270230]: 2025-10-01 17:03:38.023405091 +0000 UTC m=+1.508657399 container remove f80f675f1a91e15062fbd6907d4af6b896f9df241bb728670275ce832ab7613b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_greider, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 17:03:38 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 48 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 66 KiB/s wr, 7 op/s
Oct 01 17:03:38 compute-0 systemd[1]: libpod-conmon-f80f675f1a91e15062fbd6907d4af6b896f9df241bb728670275ce832ab7613b.scope: Deactivated successfully.
Oct 01 17:03:38 compute-0 sudo[270123]: pam_unix(sudo:session): session closed for user root
Oct 01 17:03:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:03:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:03:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:03:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:03:38 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev bd77926f-45ba-4c12-8c61-dcd85cc093f2 does not exist
Oct 01 17:03:38 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 887a4322-7680-443b-a7dc-d80a9c5a31fc does not exist
Oct 01 17:03:38 compute-0 podman[270280]: 2025-10-01 17:03:38.14328821 +0000 UTC m=+0.199718883 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 01 17:03:38 compute-0 sudo[270317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:03:38 compute-0 sudo[270317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:03:38 compute-0 sudo[270317]: pam_unix(sudo:session): session closed for user root
Oct 01 17:03:38 compute-0 sudo[270346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 17:03:38 compute-0 sudo[270346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:03:38 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:03:38 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:03:38 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:03:38 compute-0 sudo[270346]: pam_unix(sudo:session): session closed for user root
Oct 01 17:03:39 compute-0 ceph-mon[74273]: pgmap v1016: 305 pgs: 305 active+clean; 48 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 66 KiB/s wr, 7 op/s
Oct 01 17:03:39 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "e14aa593-425a-47dd-b300-8569859e6275", "auth_id": "tempest-cephx-id-26224483", "tenant_id": "a0ac8ec815504b8dae62c40a55008f52", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:03:39 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume authorize, sub_name:e14aa593-425a-47dd-b300-8569859e6275, tenant_id:a0ac8ec815504b8dae62c40a55008f52, vol_name:cephfs) < ""
Oct 01 17:03:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"} v 0) v1
Oct 01 17:03:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:39 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID tempest-cephx-id-26224483 with tenant a0ac8ec815504b8dae62c40a55008f52
Oct 01 17:03:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/e14aa593-425a-47dd-b300-8569859e6275/4e937636-ec4f-4e12-8ac3-3c57313b7bf8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_e14aa593-425a-47dd-b300-8569859e6275", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:03:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/e14aa593-425a-47dd-b300-8569859e6275/4e937636-ec4f-4e12-8ac3-3c57313b7bf8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_e14aa593-425a-47dd-b300-8569859e6275", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/e14aa593-425a-47dd-b300-8569859e6275/4e937636-ec4f-4e12-8ac3-3c57313b7bf8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_e14aa593-425a-47dd-b300-8569859e6275", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:39 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume authorize, sub_name:e14aa593-425a-47dd-b300-8569859e6275, tenant_id:a0ac8ec815504b8dae62c40a55008f52, vol_name:cephfs) < ""
Oct 01 17:03:40 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 49 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 98 KiB/s wr, 12 op/s
Oct 01 17:03:40 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "e14aa593-425a-47dd-b300-8569859e6275", "auth_id": "tempest-cephx-id-26224483", "tenant_id": "a0ac8ec815504b8dae62c40a55008f52", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:03:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/e14aa593-425a-47dd-b300-8569859e6275/4e937636-ec4f-4e12-8ac3-3c57313b7bf8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_e14aa593-425a-47dd-b300-8569859e6275", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/e14aa593-425a-47dd-b300-8569859e6275/4e937636-ec4f-4e12-8ac3-3c57313b7bf8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_e14aa593-425a-47dd-b300-8569859e6275", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:40 compute-0 ceph-mon[74273]: pgmap v1017: 305 pgs: 305 active+clean; 49 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 98 KiB/s wr, 12 op/s
Oct 01 17:03:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:03:40 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:03:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Oct 01 17:03:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:03:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Oct 01 17:03:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Oct 01 17:03:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 01 17:03:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:40 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:03:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:03:40 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:03:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:03:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:03:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:03:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:03:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:03:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:03:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:03:41 compute-0 podman[270372]: 2025-10-01 17:03:41.765370901 +0000 UTC m=+0.072244725 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 17:03:41 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:03:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:03:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Oct 01 17:03:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 01 17:03:41 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:03:42 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 49 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 71 KiB/s wr, 9 op/s
Oct 01 17:03:42 compute-0 ceph-mon[74273]: pgmap v1018: 305 pgs: 305 active+clean; 49 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 71 KiB/s wr, 9 op/s
Oct 01 17:03:43 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "e14aa593-425a-47dd-b300-8569859e6275", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume deauthorize, sub_name:e14aa593-425a-47dd-b300-8569859e6275, vol_name:cephfs) < ""
Oct 01 17:03:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"} v 0) v1
Oct 01 17:03:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"} v 0) v1
Oct 01 17:03:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]: dispatch
Oct 01 17:03:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]': finished
Oct 01 17:03:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume deauthorize, sub_name:e14aa593-425a-47dd-b300-8569859e6275, vol_name:cephfs) < ""
Oct 01 17:03:43 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "e14aa593-425a-47dd-b300-8569859e6275", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume evict, sub_name:e14aa593-425a-47dd-b300-8569859e6275, vol_name:cephfs) < ""
Oct 01 17:03:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-26224483, client_metadata.root=/volumes/_nogroup/e14aa593-425a-47dd-b300-8569859e6275/4e937636-ec4f-4e12-8ac3-3c57313b7bf8
Oct 01 17:03:43 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=tempest-cephx-id-26224483,client_metadata.root=/volumes/_nogroup/e14aa593-425a-47dd-b300-8569859e6275/4e937636-ec4f-4e12-8ac3-3c57313b7bf8],prefix=session evict} (starting...)
Oct 01 17:03:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:03:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume evict, sub_name:e14aa593-425a-47dd-b300-8569859e6275, vol_name:cephfs) < ""
Oct 01 17:03:43 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e14aa593-425a-47dd-b300-8569859e6275", "format": "json"}]: dispatch
Oct 01 17:03:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e14aa593-425a-47dd-b300-8569859e6275, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:03:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e14aa593-425a-47dd-b300-8569859e6275, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:03:43 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:03:43.671+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e14aa593-425a-47dd-b300-8569859e6275' of type subvolume
Oct 01 17:03:43 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e14aa593-425a-47dd-b300-8569859e6275' of type subvolume
Oct 01 17:03:43 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e14aa593-425a-47dd-b300-8569859e6275", "force": true, "format": "json"}]: dispatch
Oct 01 17:03:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e14aa593-425a-47dd-b300-8569859e6275, vol_name:cephfs) < ""
Oct 01 17:03:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e14aa593-425a-47dd-b300-8569859e6275'' moved to trashcan
Oct 01 17:03:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:03:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e14aa593-425a-47dd-b300-8569859e6275, vol_name:cephfs) < ""
Oct 01 17:03:43 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:43 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]: dispatch
Oct 01 17:03:43 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]': finished
Oct 01 17:03:44 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 49 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 93 KiB/s wr, 11 op/s
Oct 01 17:03:44 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:03:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:03:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Oct 01 17:03:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:03:44 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice with tenant 1841221f332340a299707d253063659f
Oct 01 17:03:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:03:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:44 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:03:44 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "e14aa593-425a-47dd-b300-8569859e6275", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:44 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "e14aa593-425a-47dd-b300-8569859e6275", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:44 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e14aa593-425a-47dd-b300-8569859e6275", "format": "json"}]: dispatch
Oct 01 17:03:44 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e14aa593-425a-47dd-b300-8569859e6275", "force": true, "format": "json"}]: dispatch
Oct 01 17:03:44 compute-0 ceph-mon[74273]: pgmap v1019: 305 pgs: 305 active+clean; 49 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 93 KiB/s wr, 11 op/s
Oct 01 17:03:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:03:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:03:45 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:03:46 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 49 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 53 KiB/s wr, 7 op/s
Oct 01 17:03:46 compute-0 ceph-mon[74273]: pgmap v1020: 305 pgs: 305 active+clean; 49 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 53 KiB/s wr, 7 op/s
Oct 01 17:03:46 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "tenant_id": "a0ac8ec815504b8dae62c40a55008f52", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:03:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume authorize, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, tenant_id:a0ac8ec815504b8dae62c40a55008f52, vol_name:cephfs) < ""
Oct 01 17:03:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"} v 0) v1
Oct 01 17:03:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:47 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID tempest-cephx-id-26224483 with tenant a0ac8ec815504b8dae62c40a55008f52
Oct 01 17:03:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_4554af13-83dd-4199-b167-2a7f1fff2e32", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:03:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_4554af13-83dd-4199-b167-2a7f1fff2e32", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_4554af13-83dd-4199-b167-2a7f1fff2e32", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:47 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume authorize, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, tenant_id:a0ac8ec815504b8dae62c40a55008f52, vol_name:cephfs) < ""
Oct 01 17:03:47 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "tenant_id": "a0ac8ec815504b8dae62c40a55008f52", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:03:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_4554af13-83dd-4199-b167-2a7f1fff2e32", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_4554af13-83dd-4199-b167-2a7f1fff2e32", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:47 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:03:47 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Oct 01 17:03:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:03:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Oct 01 17:03:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Oct 01 17:03:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 01 17:03:47 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:47 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:03:47 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:47 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:03:47 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:03:47 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:03:47 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:48 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 49 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 53 KiB/s wr, 7 op/s
Oct 01 17:03:48 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:03:48 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:03:48 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Oct 01 17:03:48 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 01 17:03:48 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:03:48 compute-0 ceph-mon[74273]: pgmap v1021: 305 pgs: 305 active+clean; 49 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 53 KiB/s wr, 7 op/s
Oct 01 17:03:50 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 50 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 103 KiB/s wr, 14 op/s
Oct 01 17:03:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:03:50 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume deauthorize, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:03:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"} v 0) v1
Oct 01 17:03:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"} v 0) v1
Oct 01 17:03:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]: dispatch
Oct 01 17:03:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]': finished
Oct 01 17:03:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume deauthorize, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:03:50 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume evict, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:03:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-26224483, client_metadata.root=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe
Oct 01 17:03:50 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 17:03:50 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=tempest-cephx-id-26224483,client_metadata.root=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe],prefix=session evict} (starting...)
Oct 01 17:03:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:03:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume evict, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:03:51 compute-0 ceph-mon[74273]: pgmap v1022: 305 pgs: 305 active+clean; 50 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 103 KiB/s wr, 14 op/s
Oct 01 17:03:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]: dispatch
Oct 01 17:03:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]': finished
Oct 01 17:03:51 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:03:51 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:03:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Oct 01 17:03:51 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:51 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice_bob with tenant 1841221f332340a299707d253063659f
Oct 01 17:03:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:03:51 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:51 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:51 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:03:52 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 50 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 72 KiB/s wr, 9 op/s
Oct 01 17:03:52 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:52 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:52 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:52 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:52 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:53 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:03:53 compute-0 ceph-mon[74273]: pgmap v1023: 305 pgs: 305 active+clean; 50 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 72 KiB/s wr, 9 op/s
Oct 01 17:03:53 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "05ea150c-46e1-4db5-bef4-041e811a35f2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:03:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:05ea150c-46e1-4db5-bef4-041e811a35f2, vol_name:cephfs) < ""
Oct 01 17:03:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/05ea150c-46e1-4db5-bef4-041e811a35f2/.meta.tmp'
Oct 01 17:03:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/05ea150c-46e1-4db5-bef4-041e811a35f2/.meta.tmp' to config b'/volumes/_nogroup/05ea150c-46e1-4db5-bef4-041e811a35f2/.meta'
Oct 01 17:03:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:05ea150c-46e1-4db5-bef4-041e811a35f2, vol_name:cephfs) < ""
Oct 01 17:03:53 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "05ea150c-46e1-4db5-bef4-041e811a35f2", "format": "json"}]: dispatch
Oct 01 17:03:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:05ea150c-46e1-4db5-bef4-041e811a35f2, vol_name:cephfs) < ""
Oct 01 17:03:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:05ea150c-46e1-4db5-bef4-041e811a35f2, vol_name:cephfs) < ""
Oct 01 17:03:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:03:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:03:54 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 50 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 100 KiB/s wr, 13 op/s
Oct 01 17:03:54 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:03:54 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "tenant_id": "a0ac8ec815504b8dae62c40a55008f52", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:03:54 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume authorize, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, tenant_id:a0ac8ec815504b8dae62c40a55008f52, vol_name:cephfs) < ""
Oct 01 17:03:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"} v 0) v1
Oct 01 17:03:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:54 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID tempest-cephx-id-26224483 with tenant a0ac8ec815504b8dae62c40a55008f52
Oct 01 17:03:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_4554af13-83dd-4199-b167-2a7f1fff2e32", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:03:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_4554af13-83dd-4199-b167-2a7f1fff2e32", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_4554af13-83dd-4199-b167-2a7f1fff2e32", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:54 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume authorize, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, tenant_id:a0ac8ec815504b8dae62c40a55008f52, vol_name:cephfs) < ""
Oct 01 17:03:54 compute-0 podman[270394]: 2025-10-01 17:03:54.803217237 +0000 UTC m=+0.106909909 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 01 17:03:55 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "05ea150c-46e1-4db5-bef4-041e811a35f2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:03:55 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "05ea150c-46e1-4db5-bef4-041e811a35f2", "format": "json"}]: dispatch
Oct 01 17:03:55 compute-0 ceph-mon[74273]: pgmap v1024: 305 pgs: 305 active+clean; 50 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 100 KiB/s wr, 13 op/s
Oct 01 17:03:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_4554af13-83dd-4199-b167-2a7f1fff2e32", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_4554af13-83dd-4199-b167-2a7f1fff2e32", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:55 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Oct 01 17:03:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Oct 01 17:03:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Oct 01 17:03:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Oct 01 17:03:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:55 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:03:55 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:03:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:03:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:03:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:03:56 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 50 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 78 KiB/s wr, 10 op/s
Oct 01 17:03:56 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "tenant_id": "a0ac8ec815504b8dae62c40a55008f52", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:03:56 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:56 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Oct 01 17:03:56 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Oct 01 17:03:57 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "05ea150c-46e1-4db5-bef4-041e811a35f2", "snap_name": "d7ed2ab1-74ac-4e22-96ae-0d4b6ae249f8", "format": "json"}]: dispatch
Oct 01 17:03:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d7ed2ab1-74ac-4e22-96ae-0d4b6ae249f8, sub_name:05ea150c-46e1-4db5-bef4-041e811a35f2, vol_name:cephfs) < ""
Oct 01 17:03:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d7ed2ab1-74ac-4e22-96ae-0d4b6ae249f8, sub_name:05ea150c-46e1-4db5-bef4-041e811a35f2, vol_name:cephfs) < ""
Oct 01 17:03:57 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:57 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:57 compute-0 ceph-mon[74273]: pgmap v1025: 305 pgs: 305 active+clean; 50 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 78 KiB/s wr, 10 op/s
Oct 01 17:03:57 compute-0 podman[270415]: 2025-10-01 17:03:57.78644968 +0000 UTC m=+0.104607110 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 01 17:03:58 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 50 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 78 KiB/s wr, 10 op/s
Oct 01 17:03:58 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume deauthorize, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:03:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"} v 0) v1
Oct 01 17:03:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"} v 0) v1
Oct 01 17:03:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]: dispatch
Oct 01 17:03:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]': finished
Oct 01 17:03:58 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "05ea150c-46e1-4db5-bef4-041e811a35f2", "snap_name": "d7ed2ab1-74ac-4e22-96ae-0d4b6ae249f8", "format": "json"}]: dispatch
Oct 01 17:03:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]: dispatch
Oct 01 17:03:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]': finished
Oct 01 17:03:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume deauthorize, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:03:58 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume evict, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:03:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-26224483, client_metadata.root=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe
Oct 01 17:03:58 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=tempest-cephx-id-26224483,client_metadata.root=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe],prefix=session evict} (starting...)
Oct 01 17:03:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:03:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume evict, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:03:58 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "05ea150c-46e1-4db5-bef4-041e811a35f2", "snap_name": "2c3c3790-b86d-42b8-9d53-c0c1697de805", "format": "json"}]: dispatch
Oct 01 17:03:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2c3c3790-b86d-42b8-9d53-c0c1697de805, sub_name:05ea150c-46e1-4db5-bef4-041e811a35f2, vol_name:cephfs) < ""
Oct 01 17:03:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2c3c3790-b86d-42b8-9d53-c0c1697de805, sub_name:05ea150c-46e1-4db5-bef4-041e811a35f2, vol_name:cephfs) < ""
Oct 01 17:03:58 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:03:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:03:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Oct 01 17:03:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:58 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice_bob with tenant 1841221f332340a299707d253063659f
Oct 01 17:03:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:03:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:03:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:03:59 compute-0 ceph-mon[74273]: pgmap v1026: 305 pgs: 305 active+clean; 50 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 78 KiB/s wr, 10 op/s
Oct 01 17:03:59 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:59 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:03:59 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "05ea150c-46e1-4db5-bef4-041e811a35f2", "snap_name": "2c3c3790-b86d-42b8-9d53-c0c1697de805", "format": "json"}]: dispatch
Oct 01 17:03:59 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:03:59 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:03:59 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:04:00 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 50 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 123 KiB/s wr, 16 op/s
Oct 01 17:04:00 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:04:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:04:01 compute-0 ceph-mon[74273]: pgmap v1027: 305 pgs: 305 active+clean; 50 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 123 KiB/s wr, 16 op/s
Oct 01 17:04:01 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "tenant_id": "a0ac8ec815504b8dae62c40a55008f52", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:04:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume authorize, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, tenant_id:a0ac8ec815504b8dae62c40a55008f52, vol_name:cephfs) < ""
Oct 01 17:04:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"} v 0) v1
Oct 01 17:04:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:04:01 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID tempest-cephx-id-26224483 with tenant a0ac8ec815504b8dae62c40a55008f52
Oct 01 17:04:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_4554af13-83dd-4199-b167-2a7f1fff2e32", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:04:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_4554af13-83dd-4199-b167-2a7f1fff2e32", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:04:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_4554af13-83dd-4199-b167-2a7f1fff2e32", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:04:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume authorize, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, tenant_id:a0ac8ec815504b8dae62c40a55008f52, vol_name:cephfs) < ""
Oct 01 17:04:02 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "05ea150c-46e1-4db5-bef4-041e811a35f2", "snap_name": "2c3c3790-b86d-42b8-9d53-c0c1697de805_af799ca1-a7f4-4812-94f2-d79db8e45f3d", "force": true, "format": "json"}]: dispatch
Oct 01 17:04:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2c3c3790-b86d-42b8-9d53-c0c1697de805_af799ca1-a7f4-4812-94f2-d79db8e45f3d, sub_name:05ea150c-46e1-4db5-bef4-041e811a35f2, vol_name:cephfs) < ""
Oct 01 17:04:02 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 50 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 73 KiB/s wr, 9 op/s
Oct 01 17:04:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/05ea150c-46e1-4db5-bef4-041e811a35f2/.meta.tmp'
Oct 01 17:04:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/05ea150c-46e1-4db5-bef4-041e811a35f2/.meta.tmp' to config b'/volumes/_nogroup/05ea150c-46e1-4db5-bef4-041e811a35f2/.meta'
Oct 01 17:04:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2c3c3790-b86d-42b8-9d53-c0c1697de805_af799ca1-a7f4-4812-94f2-d79db8e45f3d, sub_name:05ea150c-46e1-4db5-bef4-041e811a35f2, vol_name:cephfs) < ""
Oct 01 17:04:02 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "05ea150c-46e1-4db5-bef4-041e811a35f2", "snap_name": "2c3c3790-b86d-42b8-9d53-c0c1697de805", "force": true, "format": "json"}]: dispatch
Oct 01 17:04:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2c3c3790-b86d-42b8-9d53-c0c1697de805, sub_name:05ea150c-46e1-4db5-bef4-041e811a35f2, vol_name:cephfs) < ""
Oct 01 17:04:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/05ea150c-46e1-4db5-bef4-041e811a35f2/.meta.tmp'
Oct 01 17:04:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/05ea150c-46e1-4db5-bef4-041e811a35f2/.meta.tmp' to config b'/volumes/_nogroup/05ea150c-46e1-4db5-bef4-041e811a35f2/.meta'
Oct 01 17:04:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2c3c3790-b86d-42b8-9d53-c0c1697de805, sub_name:05ea150c-46e1-4db5-bef4-041e811a35f2, vol_name:cephfs) < ""
Oct 01 17:04:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:04:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_4554af13-83dd-4199-b167-2a7f1fff2e32", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:04:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_4554af13-83dd-4199-b167-2a7f1fff2e32", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:04:02 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Oct 01 17:04:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Oct 01 17:04:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Oct 01 17:04:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Oct 01 17:04:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:02 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:04:02 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:04:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:04:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:03 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "tenant_id": "a0ac8ec815504b8dae62c40a55008f52", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:04:03 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "05ea150c-46e1-4db5-bef4-041e811a35f2", "snap_name": "2c3c3790-b86d-42b8-9d53-c0c1697de805_af799ca1-a7f4-4812-94f2-d79db8e45f3d", "force": true, "format": "json"}]: dispatch
Oct 01 17:04:03 compute-0 ceph-mon[74273]: pgmap v1028: 305 pgs: 305 active+clean; 50 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 73 KiB/s wr, 9 op/s
Oct 01 17:04:03 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "05ea150c-46e1-4db5-bef4-041e811a35f2", "snap_name": "2c3c3790-b86d-42b8-9d53-c0c1697de805", "force": true, "format": "json"}]: dispatch
Oct 01 17:04:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Oct 01 17:04:03 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Oct 01 17:04:04 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 51 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 120 KiB/s wr, 16 op/s
Oct 01 17:04:04 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:04 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:05 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:04:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume deauthorize, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:04:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct 01 17:04:05 compute-0 ceph-mon[74273]: pgmap v1029: 305 pgs: 305 active+clean; 51 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 120 KiB/s wr, 16 op/s
Oct 01 17:04:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct 01 17:04:05 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct 01 17:04:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"} v 0) v1
Oct 01 17:04:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:04:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"} v 0) v1
Oct 01 17:04:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]: dispatch
Oct 01 17:04:05 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]': finished
Oct 01 17:04:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume deauthorize, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:04:05 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:04:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume evict, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:04:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-26224483, client_metadata.root=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe
Oct 01 17:04:05 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=tempest-cephx-id-26224483,client_metadata.root=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe],prefix=session evict} (starting...)
Oct 01 17:04:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:04:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume evict, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:04:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:04:05 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "05ea150c-46e1-4db5-bef4-041e811a35f2", "snap_name": "d7ed2ab1-74ac-4e22-96ae-0d4b6ae249f8_3e295899-8577-429c-89d2-8b88c17eefc7", "force": true, "format": "json"}]: dispatch
Oct 01 17:04:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d7ed2ab1-74ac-4e22-96ae-0d4b6ae249f8_3e295899-8577-429c-89d2-8b88c17eefc7, sub_name:05ea150c-46e1-4db5-bef4-041e811a35f2, vol_name:cephfs) < ""
Oct 01 17:04:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/05ea150c-46e1-4db5-bef4-041e811a35f2/.meta.tmp'
Oct 01 17:04:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/05ea150c-46e1-4db5-bef4-041e811a35f2/.meta.tmp' to config b'/volumes/_nogroup/05ea150c-46e1-4db5-bef4-041e811a35f2/.meta'
Oct 01 17:04:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d7ed2ab1-74ac-4e22-96ae-0d4b6ae249f8_3e295899-8577-429c-89d2-8b88c17eefc7, sub_name:05ea150c-46e1-4db5-bef4-041e811a35f2, vol_name:cephfs) < ""
Oct 01 17:04:05 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "05ea150c-46e1-4db5-bef4-041e811a35f2", "snap_name": "d7ed2ab1-74ac-4e22-96ae-0d4b6ae249f8", "force": true, "format": "json"}]: dispatch
Oct 01 17:04:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d7ed2ab1-74ac-4e22-96ae-0d4b6ae249f8, sub_name:05ea150c-46e1-4db5-bef4-041e811a35f2, vol_name:cephfs) < ""
Oct 01 17:04:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/05ea150c-46e1-4db5-bef4-041e811a35f2/.meta.tmp'
Oct 01 17:04:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/05ea150c-46e1-4db5-bef4-041e811a35f2/.meta.tmp' to config b'/volumes/_nogroup/05ea150c-46e1-4db5-bef4-041e811a35f2/.meta'
Oct 01 17:04:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d7ed2ab1-74ac-4e22-96ae-0d4b6ae249f8, sub_name:05ea150c-46e1-4db5-bef4-041e811a35f2, vol_name:cephfs) < ""
Oct 01 17:04:06 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 51 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 110 KiB/s wr, 14 op/s
Oct 01 17:04:06 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:04:06 compute-0 ceph-mon[74273]: osdmap e143: 3 total, 3 up, 3 in
Oct 01 17:04:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:04:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]: dispatch
Oct 01 17:04:06 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]': finished
Oct 01 17:04:06 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:04:06 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:04:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:04:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Oct 01 17:04:06 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:04:06 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice bob with tenant 1841221f332340a299707d253063659f
Oct 01 17:04:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:04:06 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:04:06 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:04:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:04:07 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "05ea150c-46e1-4db5-bef4-041e811a35f2", "snap_name": "d7ed2ab1-74ac-4e22-96ae-0d4b6ae249f8_3e295899-8577-429c-89d2-8b88c17eefc7", "force": true, "format": "json"}]: dispatch
Oct 01 17:04:07 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "05ea150c-46e1-4db5-bef4-041e811a35f2", "snap_name": "d7ed2ab1-74ac-4e22-96ae-0d4b6ae249f8", "force": true, "format": "json"}]: dispatch
Oct 01 17:04:07 compute-0 ceph-mon[74273]: pgmap v1031: 305 pgs: 305 active+clean; 51 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 110 KiB/s wr, 14 op/s
Oct 01 17:04:07 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:04:07 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:04:07 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:04:08 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 51 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 110 KiB/s wr, 14 op/s
Oct 01 17:04:08 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:04:08 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "tenant_id": "a0ac8ec815504b8dae62c40a55008f52", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:04:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume authorize, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, tenant_id:a0ac8ec815504b8dae62c40a55008f52, vol_name:cephfs) < ""
Oct 01 17:04:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"} v 0) v1
Oct 01 17:04:08 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:04:08 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID tempest-cephx-id-26224483 with tenant a0ac8ec815504b8dae62c40a55008f52
Oct 01 17:04:08 compute-0 podman[270438]: 2025-10-01 17:04:08.853003262 +0000 UTC m=+0.172063857 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 01 17:04:09 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_4554af13-83dd-4199-b167-2a7f1fff2e32", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:04:09 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_4554af13-83dd-4199-b167-2a7f1fff2e32", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:04:09 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_4554af13-83dd-4199-b167-2a7f1fff2e32", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:04:09 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume authorize, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, tenant_id:a0ac8ec815504b8dae62c40a55008f52, vol_name:cephfs) < ""
Oct 01 17:04:09 compute-0 ceph-mon[74273]: pgmap v1032: 305 pgs: 305 active+clean; 51 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 110 KiB/s wr, 14 op/s
Oct 01 17:04:09 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "tenant_id": "a0ac8ec815504b8dae62c40a55008f52", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:04:09 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:04:09 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_4554af13-83dd-4199-b167-2a7f1fff2e32", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:04:09 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-26224483", "caps": ["mds", "allow rw path=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_4554af13-83dd-4199-b167-2a7f1fff2e32", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:04:09 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "05ea150c-46e1-4db5-bef4-041e811a35f2", "format": "json"}]: dispatch
Oct 01 17:04:09 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:05ea150c-46e1-4db5-bef4-041e811a35f2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:04:09 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:05ea150c-46e1-4db5-bef4-041e811a35f2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:04:09 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:04:09.481+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '05ea150c-46e1-4db5-bef4-041e811a35f2' of type subvolume
Oct 01 17:04:09 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '05ea150c-46e1-4db5-bef4-041e811a35f2' of type subvolume
Oct 01 17:04:09 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "05ea150c-46e1-4db5-bef4-041e811a35f2", "force": true, "format": "json"}]: dispatch
Oct 01 17:04:09 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:05ea150c-46e1-4db5-bef4-041e811a35f2, vol_name:cephfs) < ""
Oct 01 17:04:09 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/05ea150c-46e1-4db5-bef4-041e811a35f2'' moved to trashcan
Oct 01 17:04:09 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:04:09 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:05ea150c-46e1-4db5-bef4-041e811a35f2, vol_name:cephfs) < ""
Oct 01 17:04:09 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:04:09 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:10 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 51 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 102 KiB/s wr, 14 op/s
Oct 01 17:04:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Oct 01 17:04:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:04:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Oct 01 17:04:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Oct 01 17:04:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 01 17:04:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct 01 17:04:10 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:04:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:04:10 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:04:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:04:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct 01 17:04:10 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "05ea150c-46e1-4db5-bef4-041e811a35f2", "format": "json"}]: dispatch
Oct 01 17:04:10 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "05ea150c-46e1-4db5-bef4-041e811a35f2", "force": true, "format": "json"}]: dispatch
Oct 01 17:04:10 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:04:10 compute-0 ceph-mon[74273]: pgmap v1033: 305 pgs: 305 active+clean; 51 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 102 KiB/s wr, 14 op/s
Oct 01 17:04:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:04:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Oct 01 17:04:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 01 17:04:10 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct 01 17:04:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:04:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct 01 17:04:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct 01 17:04:10 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_17:04:11
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.meta', 'vms', 'backups', 'default.rgw.log', 'default.rgw.control']
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:04:11 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:04:11 compute-0 ceph-mon[74273]: osdmap e144: 3 total, 3 up, 3 in
Oct 01 17:04:11 compute-0 ceph-mon[74273]: osdmap e145: 3 total, 3 up, 3 in
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6a6b0432-ea25-4083-8cff-afe449e66429", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:04:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6a6b0432-ea25-4083-8cff-afe449e66429, vol_name:cephfs) < ""
Oct 01 17:04:12 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 51 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 749 B/s rd, 67 KiB/s wr, 9 op/s
Oct 01 17:04:12 compute-0 ceph-mgr[74571]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3235544197
Oct 01 17:04:12 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6a6b0432-ea25-4083-8cff-afe449e66429", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:04:12 compute-0 ceph-mon[74273]: pgmap v1036: 305 pgs: 305 active+clean; 51 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 749 B/s rd, 67 KiB/s wr, 9 op/s
Oct 01 17:04:12 compute-0 podman[270468]: 2025-10-01 17:04:12.771466411 +0000 UTC m=+0.077473437 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 01 17:04:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6a6b0432-ea25-4083-8cff-afe449e66429/.meta.tmp'
Oct 01 17:04:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6a6b0432-ea25-4083-8cff-afe449e66429/.meta.tmp' to config b'/volumes/_nogroup/6a6b0432-ea25-4083-8cff-afe449e66429/.meta'
Oct 01 17:04:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6a6b0432-ea25-4083-8cff-afe449e66429, vol_name:cephfs) < ""
Oct 01 17:04:13 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6a6b0432-ea25-4083-8cff-afe449e66429", "format": "json"}]: dispatch
Oct 01 17:04:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6a6b0432-ea25-4083-8cff-afe449e66429, vol_name:cephfs) < ""
Oct 01 17:04:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6a6b0432-ea25-4083-8cff-afe449e66429, vol_name:cephfs) < ""
Oct 01 17:04:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:04:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:04:13 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:04:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:04:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Oct 01 17:04:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:04:13 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice bob with tenant 1841221f332340a299707d253063659f
Oct 01 17:04:14 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 52 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 121 KiB/s wr, 15 op/s
Oct 01 17:04:14 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6a6b0432-ea25-4083-8cff-afe449e66429", "format": "json"}]: dispatch
Oct 01 17:04:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:04:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:04:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:04:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:04:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:04:14 compute-0 nova_compute[259504]: 2025-10-01 17:04:14.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:04:15 compute-0 sshd-session[270487]: Connection closed by 136.26.36.177 port 55804
Oct 01 17:04:15 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:04:15 compute-0 ceph-mon[74273]: pgmap v1037: 305 pgs: 305 active+clean; 52 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 121 KiB/s wr, 15 op/s
Oct 01 17:04:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:04:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:04:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:04:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Oct 01 17:04:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Oct 01 17:04:15 compute-0 sshd-session[270488]: Invalid user a from 136.26.36.177 port 55806
Oct 01 17:04:15 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Oct 01 17:04:15 compute-0 sshd-session[270488]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 17:04:15 compute-0 sshd-session[270488]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=136.26.36.177
Oct 01 17:04:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:04:16 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:04:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume deauthorize, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:04:16 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 52 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 KiB/s wr, 9 op/s
Oct 01 17:04:16 compute-0 ceph-mon[74273]: osdmap e146: 3 total, 3 up, 3 in
Oct 01 17:04:16 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:04:16 compute-0 ceph-mon[74273]: pgmap v1039: 305 pgs: 305 active+clean; 52 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 KiB/s wr, 9 op/s
Oct 01 17:04:17 compute-0 sshd-session[270488]: Failed password for invalid user a from 136.26.36.177 port 55806 ssh2
Oct 01 17:04:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"} v 0) v1
Oct 01 17:04:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:04:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"} v 0) v1
Oct 01 17:04:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]: dispatch
Oct 01 17:04:17 compute-0 nova_compute[259504]: 2025-10-01 17:04:17.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:04:17 compute-0 nova_compute[259504]: 2025-10-01 17:04:17.818 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:04:17 compute-0 nova_compute[259504]: 2025-10-01 17:04:17.818 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:04:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]': finished
Oct 01 17:04:17 compute-0 nova_compute[259504]: 2025-10-01 17:04:17.819 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:04:17 compute-0 nova_compute[259504]: 2025-10-01 17:04:17.819 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 17:04:17 compute-0 nova_compute[259504]: 2025-10-01 17:04:17.819 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:04:18 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 52 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 270 B/s rd, 68 KiB/s wr, 7 op/s
Oct 01 17:04:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:04:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/849096429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:04:18 compute-0 nova_compute[259504]: 2025-10-01 17:04:18.226 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:04:18 compute-0 sshd-session[270488]: Connection closed by invalid user a 136.26.36.177 port 55806 [preauth]
Oct 01 17:04:18 compute-0 nova_compute[259504]: 2025-10-01 17:04:18.410 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 17:04:18 compute-0 nova_compute[259504]: 2025-10-01 17:04:18.411 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5114MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 17:04:18 compute-0 nova_compute[259504]: 2025-10-01 17:04:18.411 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:04:18 compute-0 nova_compute[259504]: 2025-10-01 17:04:18.411 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:04:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume deauthorize, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:04:18 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:04:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume evict, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:04:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-26224483, client_metadata.root=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe
Oct 01 17:04:18 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=tempest-cephx-id-26224483,client_metadata.root=/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32/de9708af-1bcd-4a03-baf2-2fa9bfaca9fe],prefix=session evict} (starting...)
Oct 01 17:04:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:04:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-26224483, format:json, prefix:fs subvolume evict, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:04:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:04:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]: dispatch
Oct 01 17:04:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-26224483"}]': finished
Oct 01 17:04:18 compute-0 ceph-mon[74273]: pgmap v1040: 305 pgs: 305 active+clean; 52 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 270 B/s rd, 68 KiB/s wr, 7 op/s
Oct 01 17:04:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/849096429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:04:18 compute-0 nova_compute[259504]: 2025-10-01 17:04:18.715 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 17:04:18 compute-0 nova_compute[259504]: 2025-10-01 17:04:18.716 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 17:04:18 compute-0 sshd-session[270513]: Invalid user nil from 136.26.36.177 port 55856
Oct 01 17:04:18 compute-0 nova_compute[259504]: 2025-10-01 17:04:18.885 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Refreshing inventories for resource provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 01 17:04:18 compute-0 sshd-session[270513]: Failed none for invalid user nil from 136.26.36.177 port 55856 ssh2
Oct 01 17:04:19 compute-0 sshd-session[270513]: Connection closed by invalid user nil 136.26.36.177 port 55856 [preauth]
Oct 01 17:04:19 compute-0 nova_compute[259504]: 2025-10-01 17:04:19.036 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Updating ProviderTree inventory for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 01 17:04:19 compute-0 nova_compute[259504]: 2025-10-01 17:04:19.037 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Updating inventory in ProviderTree for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 01 17:04:19 compute-0 nova_compute[259504]: 2025-10-01 17:04:19.052 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Refreshing aggregate associations for resource provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 01 17:04:19 compute-0 nova_compute[259504]: 2025-10-01 17:04:19.087 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Refreshing trait associations for resource provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_ABM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_ACCELERATORS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 01 17:04:19 compute-0 nova_compute[259504]: 2025-10-01 17:04:19.112 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:04:19 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6a6b0432-ea25-4083-8cff-afe449e66429", "format": "json"}]: dispatch
Oct 01 17:04:19 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6a6b0432-ea25-4083-8cff-afe449e66429, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:04:19 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6a6b0432-ea25-4083-8cff-afe449e66429, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:04:19 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:04:19.399+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6a6b0432-ea25-4083-8cff-afe449e66429' of type subvolume
Oct 01 17:04:19 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6a6b0432-ea25-4083-8cff-afe449e66429' of type subvolume
Oct 01 17:04:19 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6a6b0432-ea25-4083-8cff-afe449e66429", "force": true, "format": "json"}]: dispatch
Oct 01 17:04:19 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6a6b0432-ea25-4083-8cff-afe449e66429, vol_name:cephfs) < ""
Oct 01 17:04:19 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6a6b0432-ea25-4083-8cff-afe449e66429'' moved to trashcan
Oct 01 17:04:19 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:04:19 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6a6b0432-ea25-4083-8cff-afe449e66429, vol_name:cephfs) < ""
Oct 01 17:04:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:04:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1260864005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:04:19 compute-0 nova_compute[259504]: 2025-10-01 17:04:19.540 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:04:19 compute-0 nova_compute[259504]: 2025-10-01 17:04:19.547 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 17:04:19 compute-0 nova_compute[259504]: 2025-10-01 17:04:19.563 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 17:04:19 compute-0 nova_compute[259504]: 2025-10-01 17:04:19.566 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 17:04:19 compute-0 nova_compute[259504]: 2025-10-01 17:04:19.566 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:04:19 compute-0 sshd-session[270516]: Invalid user admin from 136.26.36.177 port 55866
Oct 01 17:04:19 compute-0 sshd-session[270516]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 17:04:19 compute-0 sshd-session[270516]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=136.26.36.177
Oct 01 17:04:19 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:04:19 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:04:19.972 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:04:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:04:19.973 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:04:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:04:19.973 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:04:20 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 52 MiB data, 223 MiB used, 60 GiB / 60 GiB avail; 324 B/s rd, 104 KiB/s wr, 10 op/s
Oct 01 17:04:20 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "auth_id": "tempest-cephx-id-26224483", "format": "json"}]: dispatch
Oct 01 17:04:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1260864005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:04:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:04:20 compute-0 nova_compute[259504]: 2025-10-01 17:04:20.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:04:20 compute-0 nova_compute[259504]: 2025-10-01 17:04:20.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:04:20 compute-0 nova_compute[259504]: 2025-10-01 17:04:20.751 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 17:04:20 compute-0 nova_compute[259504]: 2025-10-01 17:04:20.752 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 17:04:20 compute-0 nova_compute[259504]: 2025-10-01 17:04:20.771 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 17:04:20 compute-0 nova_compute[259504]: 2025-10-01 17:04:20.771 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:04:20 compute-0 nova_compute[259504]: 2025-10-01 17:04:20.772 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 17:04:20 compute-0 nova_compute[259504]: 2025-10-01 17:04:20.772 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:04:20 compute-0 nova_compute[259504]: 2025-10-01 17:04:20.773 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00017799039118968548 of space, bias 4.0, pg target 0.21358846942762258 quantized to 16 (current 16)
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:04:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:04:21 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6a6b0432-ea25-4083-8cff-afe449e66429", "format": "json"}]: dispatch
Oct 01 17:04:21 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6a6b0432-ea25-4083-8cff-afe449e66429", "force": true, "format": "json"}]: dispatch
Oct 01 17:04:21 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:04:21 compute-0 ceph-mon[74273]: pgmap v1041: 305 pgs: 305 active+clean; 52 MiB data, 223 MiB used, 60 GiB / 60 GiB avail; 324 B/s rd, 104 KiB/s wr, 10 op/s
Oct 01 17:04:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Oct 01 17:04:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:04:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Oct 01 17:04:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Oct 01 17:04:21 compute-0 nova_compute[259504]: 2025-10-01 17:04:21.749 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:04:21 compute-0 nova_compute[259504]: 2025-10-01 17:04:21.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:04:21 compute-0 nova_compute[259504]: 2025-10-01 17:04:21.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:04:21 compute-0 nova_compute[259504]: 2025-10-01 17:04:21.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:04:21 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 01 17:04:22 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 52 MiB data, 223 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 98 KiB/s wr, 10 op/s
Oct 01 17:04:22 compute-0 sshd-session[270516]: Failed password for invalid user admin from 136.26.36.177 port 55866 ssh2
Oct 01 17:04:22 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:22 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:04:22 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:22 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:04:22 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:04:22 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:04:22 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:04:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Oct 01 17:04:22 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 01 17:04:22 compute-0 ceph-mon[74273]: pgmap v1042: 305 pgs: 305 active+clean; 52 MiB data, 223 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 98 KiB/s wr, 10 op/s
Oct 01 17:04:22 compute-0 nova_compute[259504]: 2025-10-01 17:04:22.778 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:04:22 compute-0 nova_compute[259504]: 2025-10-01 17:04:22.836 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:04:23 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "format": "json"}]: dispatch
Oct 01 17:04:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4554af13-83dd-4199-b167-2a7f1fff2e32, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:04:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4554af13-83dd-4199-b167-2a7f1fff2e32, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:04:23 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:04:23.299+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4554af13-83dd-4199-b167-2a7f1fff2e32' of type subvolume
Oct 01 17:04:23 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4554af13-83dd-4199-b167-2a7f1fff2e32' of type subvolume
Oct 01 17:04:23 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "force": true, "format": "json"}]: dispatch
Oct 01 17:04:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:04:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4554af13-83dd-4199-b167-2a7f1fff2e32'' moved to trashcan
Oct 01 17:04:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:04:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4554af13-83dd-4199-b167-2a7f1fff2e32, vol_name:cephfs) < ""
Oct 01 17:04:23 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:04:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:04:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Oct 01 17:04:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:04:23 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice with tenant 1841221f332340a299707d253063659f
Oct 01 17:04:23 compute-0 sshd-session[270516]: Connection closed by invalid user admin 136.26.36.177 port 55866 [preauth]
Oct 01 17:04:24 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 53 MiB data, 223 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 81 KiB/s wr, 8 op/s
Oct 01 17:04:24 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:04:24 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "format": "json"}]: dispatch
Oct 01 17:04:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:04:24 compute-0 unix_chkpwd[270543]: password check failed for user (root)
Oct 01 17:04:24 compute-0 sshd-session[270541]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=136.26.36.177  user=root
Oct 01 17:04:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:04:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:04:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:04:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:04:24 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4850b31b-d69e-4e1f-a8c0-08812bc9688d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:04:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4850b31b-d69e-4e1f-a8c0-08812bc9688d, vol_name:cephfs) < ""
Oct 01 17:04:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4850b31b-d69e-4e1f-a8c0-08812bc9688d/.meta.tmp'
Oct 01 17:04:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4850b31b-d69e-4e1f-a8c0-08812bc9688d/.meta.tmp' to config b'/volumes/_nogroup/4850b31b-d69e-4e1f-a8c0-08812bc9688d/.meta'
Oct 01 17:04:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4850b31b-d69e-4e1f-a8c0-08812bc9688d, vol_name:cephfs) < ""
Oct 01 17:04:25 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4850b31b-d69e-4e1f-a8c0-08812bc9688d", "format": "json"}]: dispatch
Oct 01 17:04:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4850b31b-d69e-4e1f-a8c0-08812bc9688d, vol_name:cephfs) < ""
Oct 01 17:04:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4850b31b-d69e-4e1f-a8c0-08812bc9688d, vol_name:cephfs) < ""
Oct 01 17:04:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:04:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:04:25 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4554af13-83dd-4199-b167-2a7f1fff2e32", "force": true, "format": "json"}]: dispatch
Oct 01 17:04:25 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:04:25 compute-0 ceph-mon[74273]: pgmap v1043: 305 pgs: 305 active+clean; 53 MiB data, 223 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 81 KiB/s wr, 8 op/s
Oct 01 17:04:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:04:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:04:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:04:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:04:25 compute-0 nova_compute[259504]: 2025-10-01 17:04:25.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:04:25 compute-0 nova_compute[259504]: 2025-10-01 17:04:25.750 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 01 17:04:25 compute-0 podman[270544]: 2025-10-01 17:04:25.784122382 +0000 UTC m=+0.097468873 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 01 17:04:26 compute-0 nova_compute[259504]: 2025-10-01 17:04:26.027 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 01 17:04:26 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 53 MiB data, 223 MiB used, 60 GiB / 60 GiB avail; 301 B/s rd, 79 KiB/s wr, 7 op/s
Oct 01 17:04:26 compute-0 sshd-session[270541]: Failed password for root from 136.26.36.177 port 55930 ssh2
Oct 01 17:04:26 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4850b31b-d69e-4e1f-a8c0-08812bc9688d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:04:26 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4850b31b-d69e-4e1f-a8c0-08812bc9688d", "format": "json"}]: dispatch
Oct 01 17:04:27 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:04:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:27 compute-0 ceph-mon[74273]: pgmap v1044: 305 pgs: 305 active+clean; 53 MiB data, 223 MiB used, 60 GiB / 60 GiB avail; 301 B/s rd, 79 KiB/s wr, 7 op/s
Oct 01 17:04:27 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:04:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Oct 01 17:04:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:04:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Oct 01 17:04:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Oct 01 17:04:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 01 17:04:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:27 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:04:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:04:27 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:04:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:04:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:27 compute-0 sshd-session[270541]: Connection closed by authenticating user root 136.26.36.177 port 55930 [preauth]
Oct 01 17:04:28 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 53 MiB data, 223 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 67 KiB/s wr, 6 op/s
Oct 01 17:04:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:04:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Oct 01 17:04:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 01 17:04:28 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:04:28 compute-0 ceph-mon[74273]: pgmap v1045: 305 pgs: 305 active+clean; 53 MiB data, 223 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 67 KiB/s wr, 6 op/s
Oct 01 17:04:28 compute-0 sshd-session[270565]: Invalid user orangepi from 136.26.36.177 port 55996
Oct 01 17:04:28 compute-0 podman[270567]: 2025-10-01 17:04:28.594968524 +0000 UTC m=+0.066119369 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 01 17:04:28 compute-0 sshd-session[270565]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 17:04:28 compute-0 sshd-session[270565]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=136.26.36.177
Oct 01 17:04:30 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 53 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 112 KiB/s wr, 11 op/s
Oct 01 17:04:30 compute-0 sshd-session[270565]: Failed password for invalid user orangepi from 136.26.36.177 port 55996 ssh2
Oct 01 17:04:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:04:31 compute-0 ceph-mon[74273]: pgmap v1046: 305 pgs: 305 active+clean; 53 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 112 KiB/s wr, 11 op/s
Oct 01 17:04:31 compute-0 nova_compute[259504]: 2025-10-01 17:04:31.253 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:04:31 compute-0 sshd-session[270565]: Connection closed by invalid user orangepi 136.26.36.177 port 55996 [preauth]
Oct 01 17:04:31 compute-0 sshd-session[270588]: Invalid user support from 136.26.36.177 port 56048
Oct 01 17:04:31 compute-0 sshd-session[270588]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 17:04:31 compute-0 sshd-session[270588]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=136.26.36.177
Oct 01 17:04:32 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 53 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 72 KiB/s wr, 8 op/s
Oct 01 17:04:33 compute-0 ceph-mon[74273]: pgmap v1047: 305 pgs: 305 active+clean; 53 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 72 KiB/s wr, 8 op/s
Oct 01 17:04:34 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 53 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 80 KiB/s wr, 9 op/s
Oct 01 17:04:34 compute-0 sshd-session[270588]: Failed password for invalid user support from 136.26.36.177 port 56048 ssh2
Oct 01 17:04:35 compute-0 ceph-mon[74273]: pgmap v1048: 305 pgs: 305 active+clean; 53 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 80 KiB/s wr, 9 op/s
Oct 01 17:04:35 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 01 17:04:35 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4850b31b-d69e-4e1f-a8c0-08812bc9688d", "format": "json"}]: dispatch
Oct 01 17:04:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4850b31b-d69e-4e1f-a8c0-08812bc9688d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:04:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4850b31b-d69e-4e1f-a8c0-08812bc9688d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:04:35 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:04:35.437+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4850b31b-d69e-4e1f-a8c0-08812bc9688d' of type subvolume
Oct 01 17:04:35 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4850b31b-d69e-4e1f-a8c0-08812bc9688d' of type subvolume
Oct 01 17:04:35 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4850b31b-d69e-4e1f-a8c0-08812bc9688d", "force": true, "format": "json"}]: dispatch
Oct 01 17:04:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4850b31b-d69e-4e1f-a8c0-08812bc9688d, vol_name:cephfs) < ""
Oct 01 17:04:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4850b31b-d69e-4e1f-a8c0-08812bc9688d'' moved to trashcan
Oct 01 17:04:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:04:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4850b31b-d69e-4e1f-a8c0-08812bc9688d, vol_name:cephfs) < ""
Oct 01 17:04:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:04:35 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:04:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:04:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Oct 01 17:04:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:04:35 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice with tenant 1841221f332340a299707d253063659f
Oct 01 17:04:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:04:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:04:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:04:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:04:36 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 53 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 52 KiB/s wr, 6 op/s
Oct 01 17:04:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:04:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:04:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:04:36 compute-0 sshd-session[270588]: Connection closed by invalid user support 136.26.36.177 port 56048 [preauth]
Oct 01 17:04:36 compute-0 sshd-session[270590]: Invalid user ubnt from 136.26.36.177 port 56118
Oct 01 17:04:37 compute-0 sshd-session[270590]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 17:04:37 compute-0 sshd-session[270590]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=136.26.36.177
Oct 01 17:04:37 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4850b31b-d69e-4e1f-a8c0-08812bc9688d", "format": "json"}]: dispatch
Oct 01 17:04:37 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4850b31b-d69e-4e1f-a8c0-08812bc9688d", "force": true, "format": "json"}]: dispatch
Oct 01 17:04:37 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:04:37 compute-0 ceph-mon[74273]: pgmap v1049: 305 pgs: 305 active+clean; 53 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 52 KiB/s wr, 6 op/s
Oct 01 17:04:38 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 53 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 52 KiB/s wr, 6 op/s
Oct 01 17:04:38 compute-0 sudo[270592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:04:38 compute-0 sudo[270592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:38 compute-0 sudo[270592]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:38 compute-0 sudo[270617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:04:38 compute-0 sudo[270617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:38 compute-0 sudo[270617]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:38 compute-0 sudo[270642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:04:38 compute-0 sudo[270642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:38 compute-0 sudo[270642]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:38 compute-0 sudo[270667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 17:04:38 compute-0 sudo[270667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:38 compute-0 sudo[270667]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:38 compute-0 sshd-session[270590]: Failed password for invalid user ubnt from 136.26.36.177 port 56118 ssh2
Oct 01 17:04:39 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:04:39.019 162304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:71:db', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:60:3f:78:bd:29'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 17:04:39 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:04:39.020 162304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 17:04:39 compute-0 sudo[270723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:04:39 compute-0 sudo[270723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:39 compute-0 sudo[270723]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:39 compute-0 sudo[270749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:04:39 compute-0 sudo[270749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:39 compute-0 sudo[270749]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:39 compute-0 sudo[270794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:04:39 compute-0 ceph-mon[74273]: pgmap v1050: 305 pgs: 305 active+clean; 53 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 52 KiB/s wr, 6 op/s
Oct 01 17:04:39 compute-0 sudo[270794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:39 compute-0 sudo[270794]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:39 compute-0 podman[270747]: 2025-10-01 17:04:39.162030508 +0000 UTC m=+0.107204730 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 01 17:04:39 compute-0 sudo[270825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 01 17:04:39 compute-0 sudo[270825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:39 compute-0 sudo[270825]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:04:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:04:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:04:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:04:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:04:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:04:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 17:04:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:04:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 17:04:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:04:39 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 948f8fde-9db9-403e-8895-f8a4ad944daf does not exist
Oct 01 17:04:39 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev f9c5ff28-29cf-46fe-b7a6-10b6fbbd2132 does not exist
Oct 01 17:04:39 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 5a5e7920-92af-4229-a1d3-f22331f5a646 does not exist
Oct 01 17:04:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 17:04:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:04:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 17:04:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:04:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:04:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:04:39 compute-0 sudo[270870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:04:39 compute-0 sudo[270870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:39 compute-0 sudo[270870]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:39 compute-0 sudo[270895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:04:39 compute-0 sudo[270895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:39 compute-0 sudo[270895]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:39 compute-0 sudo[270920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:04:39 compute-0 sudo[270920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:39 compute-0 sudo[270920]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:39 compute-0 sudo[270945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 17:04:39 compute-0 sudo[270945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:39 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:04:39 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Oct 01 17:04:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:04:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Oct 01 17:04:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Oct 01 17:04:39 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 01 17:04:39 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:39 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:04:39 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:39 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:04:39 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:04:39 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:04:39 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:39 compute-0 podman[271011]: 2025-10-01 17:04:39.984662149 +0000 UTC m=+0.045030593 container create 2ad581ef547da0b7d5fa82e92ac4888ad0f28460cc738ba706f48167a854cd60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 01 17:04:40 compute-0 systemd[1]: Started libpod-conmon-2ad581ef547da0b7d5fa82e92ac4888ad0f28460cc738ba706f48167a854cd60.scope.
Oct 01 17:04:40 compute-0 sshd-session[270590]: Connection closed by invalid user ubnt 136.26.36.177 port 56118 [preauth]
Oct 01 17:04:40 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:04:40 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 53 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 79 KiB/s wr, 9 op/s
Oct 01 17:04:40 compute-0 podman[271011]: 2025-10-01 17:04:40.056981378 +0000 UTC m=+0.117349862 container init 2ad581ef547da0b7d5fa82e92ac4888ad0f28460cc738ba706f48167a854cd60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_neumann, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:04:40 compute-0 podman[271011]: 2025-10-01 17:04:39.963647021 +0000 UTC m=+0.024015505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:04:40 compute-0 podman[271011]: 2025-10-01 17:04:40.065324697 +0000 UTC m=+0.125693151 container start 2ad581ef547da0b7d5fa82e92ac4888ad0f28460cc738ba706f48167a854cd60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:04:40 compute-0 podman[271011]: 2025-10-01 17:04:40.069021343 +0000 UTC m=+0.129389817 container attach 2ad581ef547da0b7d5fa82e92ac4888ad0f28460cc738ba706f48167a854cd60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_neumann, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:04:40 compute-0 laughing_neumann[271027]: 167 167
Oct 01 17:04:40 compute-0 systemd[1]: libpod-2ad581ef547da0b7d5fa82e92ac4888ad0f28460cc738ba706f48167a854cd60.scope: Deactivated successfully.
Oct 01 17:04:40 compute-0 podman[271011]: 2025-10-01 17:04:40.071492515 +0000 UTC m=+0.131861009 container died 2ad581ef547da0b7d5fa82e92ac4888ad0f28460cc738ba706f48167a854cd60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_neumann, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:04:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a55716901a14ee9a3a1c44de626c597fe6533eee2f0026c0d341ca34a23940e-merged.mount: Deactivated successfully.
Oct 01 17:04:40 compute-0 podman[271011]: 2025-10-01 17:04:40.126996083 +0000 UTC m=+0.187364537 container remove 2ad581ef547da0b7d5fa82e92ac4888ad0f28460cc738ba706f48167a854cd60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_neumann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:04:40 compute-0 systemd[1]: libpod-conmon-2ad581ef547da0b7d5fa82e92ac4888ad0f28460cc738ba706f48167a854cd60.scope: Deactivated successfully.
Oct 01 17:04:40 compute-0 podman[271055]: 2025-10-01 17:04:40.294205598 +0000 UTC m=+0.045134574 container create 310bdbb39600ea7642201c84b0aab5672f099387056815be0b977d2acbf25337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mcclintock, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 01 17:04:40 compute-0 systemd[1]: Started libpod-conmon-310bdbb39600ea7642201c84b0aab5672f099387056815be0b977d2acbf25337.scope.
Oct 01 17:04:40 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb7ed71eeaa9a872a3c0fcacff79035d769166306d5700edffcc373b2dd44d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb7ed71eeaa9a872a3c0fcacff79035d769166306d5700edffcc373b2dd44d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb7ed71eeaa9a872a3c0fcacff79035d769166306d5700edffcc373b2dd44d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb7ed71eeaa9a872a3c0fcacff79035d769166306d5700edffcc373b2dd44d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb7ed71eeaa9a872a3c0fcacff79035d769166306d5700edffcc373b2dd44d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 17:04:40 compute-0 podman[271055]: 2025-10-01 17:04:40.369593319 +0000 UTC m=+0.120522315 container init 310bdbb39600ea7642201c84b0aab5672f099387056815be0b977d2acbf25337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mcclintock, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:04:40 compute-0 podman[271055]: 2025-10-01 17:04:40.277762365 +0000 UTC m=+0.028691361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:04:40 compute-0 podman[271055]: 2025-10-01 17:04:40.382136266 +0000 UTC m=+0.133065242 container start 310bdbb39600ea7642201c84b0aab5672f099387056815be0b977d2acbf25337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mcclintock, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:04:40 compute-0 podman[271055]: 2025-10-01 17:04:40.386174527 +0000 UTC m=+0.137103523 container attach 310bdbb39600ea7642201c84b0aab5672f099387056815be0b977d2acbf25337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mcclintock, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:04:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:04:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:04:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:04:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:04:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:04:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:04:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:04:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:04:40 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:04:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:04:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Oct 01 17:04:40 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 01 17:04:40 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:04:40 compute-0 ceph-mon[74273]: pgmap v1051: 305 pgs: 305 active+clean; 53 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 79 KiB/s wr, 9 op/s
Oct 01 17:04:40 compute-0 sshd-session[271044]: Invalid user user from 136.26.36.177 port 56166
Oct 01 17:04:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:04:40 compute-0 sshd-session[271044]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 17:04:40 compute-0 sshd-session[271044]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=136.26.36.177
Oct 01 17:04:41 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:04:41.022 162304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2971fc2-5b75-459a-98a0-6e626d0d4d99, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 17:04:41 compute-0 elastic_mcclintock[271071]: --> passed data devices: 0 physical, 3 LVM
Oct 01 17:04:41 compute-0 elastic_mcclintock[271071]: --> relative data size: 1.0
Oct 01 17:04:41 compute-0 elastic_mcclintock[271071]: --> All data devices are unavailable
Oct 01 17:04:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:04:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:04:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:04:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:04:41 compute-0 systemd[1]: libpod-310bdbb39600ea7642201c84b0aab5672f099387056815be0b977d2acbf25337.scope: Deactivated successfully.
Oct 01 17:04:41 compute-0 conmon[271071]: conmon 310bdbb39600ea764220 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-310bdbb39600ea7642201c84b0aab5672f099387056815be0b977d2acbf25337.scope/container/memory.events
Oct 01 17:04:41 compute-0 podman[271055]: 2025-10-01 17:04:41.385460873 +0000 UTC m=+1.136389849 container died 310bdbb39600ea7642201c84b0aab5672f099387056815be0b977d2acbf25337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 17:04:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:04:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:04:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcb7ed71eeaa9a872a3c0fcacff79035d769166306d5700edffcc373b2dd44d9-merged.mount: Deactivated successfully.
Oct 01 17:04:41 compute-0 podman[271055]: 2025-10-01 17:04:41.445537056 +0000 UTC m=+1.196466032 container remove 310bdbb39600ea7642201c84b0aab5672f099387056815be0b977d2acbf25337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 17:04:41 compute-0 systemd[1]: libpod-conmon-310bdbb39600ea7642201c84b0aab5672f099387056815be0b977d2acbf25337.scope: Deactivated successfully.
Oct 01 17:04:41 compute-0 sudo[270945]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:41 compute-0 sudo[271114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:04:41 compute-0 sudo[271114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:41 compute-0 sudo[271114]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:41 compute-0 sudo[271139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:04:41 compute-0 sudo[271139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:41 compute-0 sudo[271139]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:41 compute-0 sudo[271164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:04:41 compute-0 sudo[271164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:41 compute-0 sudo[271164]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:41 compute-0 sudo[271189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 17:04:41 compute-0 sudo[271189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:42 compute-0 podman[271254]: 2025-10-01 17:04:42.011031639 +0000 UTC m=+0.039638064 container create cd04d9622c1b62c706d837c23276e4a5a1653cb8ce3c5fe952123e1e8af7a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:04:42 compute-0 systemd[1]: Started libpod-conmon-cd04d9622c1b62c706d837c23276e4a5a1653cb8ce3c5fe952123e1e8af7a054.scope.
Oct 01 17:04:42 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 53 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 35 KiB/s wr, 3 op/s
Oct 01 17:04:42 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:04:42 compute-0 podman[271254]: 2025-10-01 17:04:41.995421411 +0000 UTC m=+0.024027836 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:04:42 compute-0 podman[271254]: 2025-10-01 17:04:42.09171926 +0000 UTC m=+0.120325715 container init cd04d9622c1b62c706d837c23276e4a5a1653cb8ce3c5fe952123e1e8af7a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kowalevski, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:04:42 compute-0 podman[271254]: 2025-10-01 17:04:42.097630392 +0000 UTC m=+0.126236817 container start cd04d9622c1b62c706d837c23276e4a5a1653cb8ce3c5fe952123e1e8af7a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 17:04:42 compute-0 nervous_kowalevski[271271]: 167 167
Oct 01 17:04:42 compute-0 podman[271254]: 2025-10-01 17:04:42.101203615 +0000 UTC m=+0.129810070 container attach cd04d9622c1b62c706d837c23276e4a5a1653cb8ce3c5fe952123e1e8af7a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kowalevski, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:04:42 compute-0 systemd[1]: libpod-cd04d9622c1b62c706d837c23276e4a5a1653cb8ce3c5fe952123e1e8af7a054.scope: Deactivated successfully.
Oct 01 17:04:42 compute-0 podman[271254]: 2025-10-01 17:04:42.103006299 +0000 UTC m=+0.131612724 container died cd04d9622c1b62c706d837c23276e4a5a1653cb8ce3c5fe952123e1e8af7a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kowalevski, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 17:04:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9a1ff4dacc6a347484ab7eb1e4422d583e6343d8dc10c10ef55e9c756853068-merged.mount: Deactivated successfully.
Oct 01 17:04:42 compute-0 podman[271254]: 2025-10-01 17:04:42.147248771 +0000 UTC m=+0.175855196 container remove cd04d9622c1b62c706d837c23276e4a5a1653cb8ce3c5fe952123e1e8af7a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 17:04:42 compute-0 systemd[1]: libpod-conmon-cd04d9622c1b62c706d837c23276e4a5a1653cb8ce3c5fe952123e1e8af7a054.scope: Deactivated successfully.
Oct 01 17:04:42 compute-0 podman[271295]: 2025-10-01 17:04:42.292791531 +0000 UTC m=+0.041247238 container create 0caf6eb99636e2e523f98de616a8bc2725e7d0363a175dc7d1647a8f1b2ac4a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_curran, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 17:04:42 compute-0 systemd[1]: Started libpod-conmon-0caf6eb99636e2e523f98de616a8bc2725e7d0363a175dc7d1647a8f1b2ac4a1.scope.
Oct 01 17:04:42 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:04:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a50c2a673cfd9af0c462655f94c11789c9a8c257e9bef984a560dd7b2a109c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:04:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a50c2a673cfd9af0c462655f94c11789c9a8c257e9bef984a560dd7b2a109c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:04:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a50c2a673cfd9af0c462655f94c11789c9a8c257e9bef984a560dd7b2a109c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:04:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a50c2a673cfd9af0c462655f94c11789c9a8c257e9bef984a560dd7b2a109c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:04:42 compute-0 podman[271295]: 2025-10-01 17:04:42.273730381 +0000 UTC m=+0.022186118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:04:42 compute-0 podman[271295]: 2025-10-01 17:04:42.372169409 +0000 UTC m=+0.120625116 container init 0caf6eb99636e2e523f98de616a8bc2725e7d0363a175dc7d1647a8f1b2ac4a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:04:42 compute-0 podman[271295]: 2025-10-01 17:04:42.385588614 +0000 UTC m=+0.134044321 container start 0caf6eb99636e2e523f98de616a8bc2725e7d0363a175dc7d1647a8f1b2ac4a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_curran, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:04:42 compute-0 podman[271295]: 2025-10-01 17:04:42.389049666 +0000 UTC m=+0.137505393 container attach 0caf6eb99636e2e523f98de616a8bc2725e7d0363a175dc7d1647a8f1b2ac4a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_curran, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:04:43 compute-0 sshd-session[271044]: Failed password for invalid user user from 136.26.36.177 port 56166 ssh2
Oct 01 17:04:43 compute-0 ceph-mon[74273]: pgmap v1052: 305 pgs: 305 active+clean; 53 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 35 KiB/s wr, 3 op/s
Oct 01 17:04:43 compute-0 friendly_curran[271311]: {
Oct 01 17:04:43 compute-0 friendly_curran[271311]:     "0": [
Oct 01 17:04:43 compute-0 friendly_curran[271311]:         {
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "devices": [
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "/dev/loop3"
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             ],
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "lv_name": "ceph_lv0",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "lv_size": "21470642176",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "name": "ceph_lv0",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "tags": {
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.cluster_name": "ceph",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.crush_device_class": "",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.encrypted": "0",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.osd_id": "0",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.type": "block",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.vdo": "0"
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             },
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "type": "block",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "vg_name": "ceph_vg0"
Oct 01 17:04:43 compute-0 friendly_curran[271311]:         }
Oct 01 17:04:43 compute-0 friendly_curran[271311]:     ],
Oct 01 17:04:43 compute-0 friendly_curran[271311]:     "1": [
Oct 01 17:04:43 compute-0 friendly_curran[271311]:         {
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "devices": [
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "/dev/loop4"
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             ],
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "lv_name": "ceph_lv1",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "lv_size": "21470642176",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "name": "ceph_lv1",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "tags": {
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.cluster_name": "ceph",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.crush_device_class": "",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.encrypted": "0",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.osd_id": "1",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.type": "block",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.vdo": "0"
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             },
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "type": "block",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "vg_name": "ceph_vg1"
Oct 01 17:04:43 compute-0 friendly_curran[271311]:         }
Oct 01 17:04:43 compute-0 friendly_curran[271311]:     ],
Oct 01 17:04:43 compute-0 friendly_curran[271311]:     "2": [
Oct 01 17:04:43 compute-0 friendly_curran[271311]:         {
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "devices": [
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "/dev/loop5"
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             ],
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "lv_name": "ceph_lv2",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "lv_size": "21470642176",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "name": "ceph_lv2",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "tags": {
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.cluster_name": "ceph",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.crush_device_class": "",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.encrypted": "0",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.osd_id": "2",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.type": "block",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:                 "ceph.vdo": "0"
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             },
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "type": "block",
Oct 01 17:04:43 compute-0 friendly_curran[271311]:             "vg_name": "ceph_vg2"
Oct 01 17:04:43 compute-0 friendly_curran[271311]:         }
Oct 01 17:04:43 compute-0 friendly_curran[271311]:     ]
Oct 01 17:04:43 compute-0 friendly_curran[271311]: }
Oct 01 17:04:43 compute-0 systemd[1]: libpod-0caf6eb99636e2e523f98de616a8bc2725e7d0363a175dc7d1647a8f1b2ac4a1.scope: Deactivated successfully.
Oct 01 17:04:43 compute-0 podman[271295]: 2025-10-01 17:04:43.156617403 +0000 UTC m=+0.905073110 container died 0caf6eb99636e2e523f98de616a8bc2725e7d0363a175dc7d1647a8f1b2ac4a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_curran, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:04:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-80a50c2a673cfd9af0c462655f94c11789c9a8c257e9bef984a560dd7b2a109c-merged.mount: Deactivated successfully.
Oct 01 17:04:43 compute-0 podman[271295]: 2025-10-01 17:04:43.222994118 +0000 UTC m=+0.971449855 container remove 0caf6eb99636e2e523f98de616a8bc2725e7d0363a175dc7d1647a8f1b2ac4a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_curran, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:04:43 compute-0 systemd[1]: libpod-conmon-0caf6eb99636e2e523f98de616a8bc2725e7d0363a175dc7d1647a8f1b2ac4a1.scope: Deactivated successfully.
Oct 01 17:04:43 compute-0 sudo[271189]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:43 compute-0 podman[271321]: 2025-10-01 17:04:43.262537422 +0000 UTC m=+0.075414776 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 01 17:04:43 compute-0 sudo[271349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:04:43 compute-0 sudo[271349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:43 compute-0 sudo[271349]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:43 compute-0 sudo[271374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:04:43 compute-0 sudo[271374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:43 compute-0 sudo[271374]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:43 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:04:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:04:43 compute-0 sudo[271399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:04:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Oct 01 17:04:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:43 compute-0 sudo[271399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:43 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice_bob with tenant 1841221f332340a299707d253063659f
Oct 01 17:04:43 compute-0 sudo[271399]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:43 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 01 17:04:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:04:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:04:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:04:43 compute-0 sudo[271424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 17:04:43 compute-0 sudo[271424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:04:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 17:04:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1401659529' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:04:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 17:04:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1401659529' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:04:43 compute-0 podman[271488]: 2025-10-01 17:04:43.956010489 +0000 UTC m=+0.120135666 container create 0b349116e92ac822f09271768b8feea6c6046076aba43537ed199d6e19f6b970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pike, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:04:43 compute-0 podman[271488]: 2025-10-01 17:04:43.861507302 +0000 UTC m=+0.025632459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:04:44 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 54 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 49 KiB/s wr, 5 op/s
Oct 01 17:04:44 compute-0 systemd[1]: Started libpod-conmon-0b349116e92ac822f09271768b8feea6c6046076aba43537ed199d6e19f6b970.scope.
Oct 01 17:04:44 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:04:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:04:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:04:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1401659529' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:04:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1401659529' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:04:44 compute-0 podman[271488]: 2025-10-01 17:04:44.142683914 +0000 UTC m=+0.306809062 container init 0b349116e92ac822f09271768b8feea6c6046076aba43537ed199d6e19f6b970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pike, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:04:44 compute-0 podman[271488]: 2025-10-01 17:04:44.156746645 +0000 UTC m=+0.320871792 container start 0b349116e92ac822f09271768b8feea6c6046076aba43537ed199d6e19f6b970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pike, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:04:44 compute-0 ecstatic_pike[271505]: 167 167
Oct 01 17:04:44 compute-0 systemd[1]: libpod-0b349116e92ac822f09271768b8feea6c6046076aba43537ed199d6e19f6b970.scope: Deactivated successfully.
Oct 01 17:04:44 compute-0 conmon[271505]: conmon 0b349116e92ac822f092 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0b349116e92ac822f09271768b8feea6c6046076aba43537ed199d6e19f6b970.scope/container/memory.events
Oct 01 17:04:44 compute-0 podman[271488]: 2025-10-01 17:04:44.174516384 +0000 UTC m=+0.338641561 container attach 0b349116e92ac822f09271768b8feea6c6046076aba43537ed199d6e19f6b970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pike, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:04:44 compute-0 podman[271488]: 2025-10-01 17:04:44.175633677 +0000 UTC m=+0.339758854 container died 0b349116e92ac822f09271768b8feea6c6046076aba43537ed199d6e19f6b970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pike, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:04:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d676e35743e8fbeb7be412df61e080072250b61f50f9e5a9f1d241c7b78b0b8-merged.mount: Deactivated successfully.
Oct 01 17:04:44 compute-0 podman[271488]: 2025-10-01 17:04:44.226387412 +0000 UTC m=+0.390512559 container remove 0b349116e92ac822f09271768b8feea6c6046076aba43537ed199d6e19f6b970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:04:44 compute-0 systemd[1]: libpod-conmon-0b349116e92ac822f09271768b8feea6c6046076aba43537ed199d6e19f6b970.scope: Deactivated successfully.
Oct 01 17:04:44 compute-0 podman[271530]: 2025-10-01 17:04:44.385513975 +0000 UTC m=+0.048305447 container create 811289e0a1ad0deb4db7567d0d8d696705af8522625a3277157cfe0281b4ab13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:04:44 compute-0 systemd[1]: Started libpod-conmon-811289e0a1ad0deb4db7567d0d8d696705af8522625a3277157cfe0281b4ab13.scope.
Oct 01 17:04:44 compute-0 podman[271530]: 2025-10-01 17:04:44.362726496 +0000 UTC m=+0.025518048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:04:44 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:04:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6b851c78c5e429cc630c7b9071af905b894acfc0de626d64f1a838b8163742/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:04:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6b851c78c5e429cc630c7b9071af905b894acfc0de626d64f1a838b8163742/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:04:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6b851c78c5e429cc630c7b9071af905b894acfc0de626d64f1a838b8163742/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:04:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6b851c78c5e429cc630c7b9071af905b894acfc0de626d64f1a838b8163742/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:04:44 compute-0 podman[271530]: 2025-10-01 17:04:44.485686878 +0000 UTC m=+0.148478390 container init 811289e0a1ad0deb4db7567d0d8d696705af8522625a3277157cfe0281b4ab13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:04:44 compute-0 podman[271530]: 2025-10-01 17:04:44.493775671 +0000 UTC m=+0.156567153 container start 811289e0a1ad0deb4db7567d0d8d696705af8522625a3277157cfe0281b4ab13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_wing, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:04:44 compute-0 podman[271530]: 2025-10-01 17:04:44.541003807 +0000 UTC m=+0.203795289 container attach 811289e0a1ad0deb4db7567d0d8d696705af8522625a3277157cfe0281b4ab13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_wing, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:04:45 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:04:45 compute-0 ceph-mon[74273]: pgmap v1053: 305 pgs: 305 active+clean; 54 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 49 KiB/s wr, 5 op/s
Oct 01 17:04:45 compute-0 sshd-session[271044]: Connection closed by invalid user user 136.26.36.177 port 56166 [preauth]
Oct 01 17:04:45 compute-0 recursing_wing[271546]: {
Oct 01 17:04:45 compute-0 recursing_wing[271546]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 17:04:45 compute-0 recursing_wing[271546]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:04:45 compute-0 recursing_wing[271546]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 17:04:45 compute-0 recursing_wing[271546]:         "osd_id": 2,
Oct 01 17:04:45 compute-0 recursing_wing[271546]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:04:45 compute-0 recursing_wing[271546]:         "type": "bluestore"
Oct 01 17:04:45 compute-0 recursing_wing[271546]:     },
Oct 01 17:04:45 compute-0 recursing_wing[271546]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 17:04:45 compute-0 recursing_wing[271546]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:04:45 compute-0 recursing_wing[271546]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 17:04:45 compute-0 recursing_wing[271546]:         "osd_id": 0,
Oct 01 17:04:45 compute-0 recursing_wing[271546]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:04:45 compute-0 recursing_wing[271546]:         "type": "bluestore"
Oct 01 17:04:45 compute-0 recursing_wing[271546]:     },
Oct 01 17:04:45 compute-0 recursing_wing[271546]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 17:04:45 compute-0 recursing_wing[271546]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:04:45 compute-0 recursing_wing[271546]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 17:04:45 compute-0 recursing_wing[271546]:         "osd_id": 1,
Oct 01 17:04:45 compute-0 recursing_wing[271546]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:04:45 compute-0 recursing_wing[271546]:         "type": "bluestore"
Oct 01 17:04:45 compute-0 recursing_wing[271546]:     }
Oct 01 17:04:45 compute-0 recursing_wing[271546]: }
Oct 01 17:04:45 compute-0 systemd[1]: libpod-811289e0a1ad0deb4db7567d0d8d696705af8522625a3277157cfe0281b4ab13.scope: Deactivated successfully.
Oct 01 17:04:45 compute-0 systemd[1]: libpod-811289e0a1ad0deb4db7567d0d8d696705af8522625a3277157cfe0281b4ab13.scope: Consumed 1.087s CPU time.
Oct 01 17:04:45 compute-0 podman[271530]: 2025-10-01 17:04:45.585656049 +0000 UTC m=+1.248447511 container died 811289e0a1ad0deb4db7567d0d8d696705af8522625a3277157cfe0281b4ab13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:04:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-be6b851c78c5e429cc630c7b9071af905b894acfc0de626d64f1a838b8163742-merged.mount: Deactivated successfully.
Oct 01 17:04:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:04:45 compute-0 podman[271530]: 2025-10-01 17:04:45.658019643 +0000 UTC m=+1.320811125 container remove 811289e0a1ad0deb4db7567d0d8d696705af8522625a3277157cfe0281b4ab13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_wing, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:04:45 compute-0 systemd[1]: libpod-conmon-811289e0a1ad0deb4db7567d0d8d696705af8522625a3277157cfe0281b4ab13.scope: Deactivated successfully.
Oct 01 17:04:45 compute-0 sudo[271424]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:04:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:04:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:04:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:04:45 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 959c8f3c-429f-4e59-9ff7-54e7e7cd00f6 does not exist
Oct 01 17:04:45 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 7ec63ade-793b-41a8-a177-179d815357b5 does not exist
Oct 01 17:04:45 compute-0 sudo[271593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:04:45 compute-0 sudo[271593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:45 compute-0 sudo[271593]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:45 compute-0 sudo[271618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 17:04:45 compute-0 sudo[271618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:04:45 compute-0 sudo[271618]: pam_unix(sudo:session): session closed for user root
Oct 01 17:04:46 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 54 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 41 KiB/s wr, 4 op/s
Oct 01 17:04:46 compute-0 sshd-session[271579]: Connection closed by authenticating user root 136.26.36.177 port 56240 [preauth]
Oct 01 17:04:46 compute-0 sshd[189011]: drop connection #0 from [136.26.36.177]:56248 on [38.129.56.223]:22 penalty: failed authentication
Oct 01 17:04:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:04:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:04:46 compute-0 ceph-mon[74273]: pgmap v1054: 305 pgs: 305 active+clean; 54 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 41 KiB/s wr, 4 op/s
Oct 01 17:04:47 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:47 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Oct 01 17:04:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Oct 01 17:04:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Oct 01 17:04:47 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Oct 01 17:04:47 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:47 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:47 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:47 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:04:47 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:04:47 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:04:47 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:47 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Oct 01 17:04:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Oct 01 17:04:47 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:48 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 54 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 41 KiB/s wr, 4 op/s
Oct 01 17:04:48 compute-0 ceph-mon[74273]: pgmap v1055: 305 pgs: 305 active+clean; 54 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 41 KiB/s wr, 4 op/s
Oct 01 17:04:50 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 54 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 75 KiB/s wr, 8 op/s
Oct 01 17:04:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:04:50 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:04:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:04:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Oct 01 17:04:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:50 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice_bob with tenant 1841221f332340a299707d253063659f
Oct 01 17:04:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:04:51 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:04:51 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:04:51 compute-0 ceph-mon[74273]: pgmap v1056: 305 pgs: 305 active+clean; 54 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 75 KiB/s wr, 8 op/s
Oct 01 17:04:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:04:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:04:51 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:04:52 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 54 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 47 KiB/s wr, 5 op/s
Oct 01 17:04:52 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "47b2be1b-8457-4141-a779-0c9e30960e86", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:04:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:47b2be1b-8457-4141-a779-0c9e30960e86, vol_name:cephfs) < ""
Oct 01 17:04:52 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:04:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/47b2be1b-8457-4141-a779-0c9e30960e86/.meta.tmp'
Oct 01 17:04:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/47b2be1b-8457-4141-a779-0c9e30960e86/.meta.tmp' to config b'/volumes/_nogroup/47b2be1b-8457-4141-a779-0c9e30960e86/.meta'
Oct 01 17:04:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:47b2be1b-8457-4141-a779-0c9e30960e86, vol_name:cephfs) < ""
Oct 01 17:04:53 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "47b2be1b-8457-4141-a779-0c9e30960e86", "format": "json"}]: dispatch
Oct 01 17:04:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:47b2be1b-8457-4141-a779-0c9e30960e86, vol_name:cephfs) < ""
Oct 01 17:04:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:47b2be1b-8457-4141-a779-0c9e30960e86, vol_name:cephfs) < ""
Oct 01 17:04:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:04:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:04:53 compute-0 ceph-mon[74273]: pgmap v1057: 305 pgs: 305 active+clean; 54 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 47 KiB/s wr, 5 op/s
Oct 01 17:04:53 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "47b2be1b-8457-4141-a779-0c9e30960e86", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:04:53 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "47b2be1b-8457-4141-a779-0c9e30960e86", "format": "json"}]: dispatch
Oct 01 17:04:53 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:04:54 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 54 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 69 KiB/s wr, 8 op/s
Oct 01 17:04:54 compute-0 ceph-mon[74273]: pgmap v1058: 305 pgs: 305 active+clean; 54 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 69 KiB/s wr, 8 op/s
Oct 01 17:04:54 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:54 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Oct 01 17:04:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Oct 01 17:04:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Oct 01 17:04:54 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Oct 01 17:04:54 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:54 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:54 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:54 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:04:54 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:04:54 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:04:54 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:04:54 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "72e3096d-d9f8-4227-b53d-59ed09f4fa07", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:04:54 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:72e3096d-d9f8-4227-b53d-59ed09f4fa07, vol_name:cephfs) < ""
Oct 01 17:04:54 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/72e3096d-d9f8-4227-b53d-59ed09f4fa07/.meta.tmp'
Oct 01 17:04:54 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/72e3096d-d9f8-4227-b53d-59ed09f4fa07/.meta.tmp' to config b'/volumes/_nogroup/72e3096d-d9f8-4227-b53d-59ed09f4fa07/.meta'
Oct 01 17:04:54 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:72e3096d-d9f8-4227-b53d-59ed09f4fa07, vol_name:cephfs) < ""
Oct 01 17:04:54 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "72e3096d-d9f8-4227-b53d-59ed09f4fa07", "format": "json"}]: dispatch
Oct 01 17:04:54 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:72e3096d-d9f8-4227-b53d-59ed09f4fa07, vol_name:cephfs) < ""
Oct 01 17:04:54 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:72e3096d-d9f8-4227-b53d-59ed09f4fa07, vol_name:cephfs) < ""
Oct 01 17:04:54 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:04:54 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:04:55 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Oct 01 17:04:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Oct 01 17:04:55 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:04:55 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "72e3096d-d9f8-4227-b53d-59ed09f4fa07", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:04:55 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "72e3096d-d9f8-4227-b53d-59ed09f4fa07", "format": "json"}]: dispatch
Oct 01 17:04:55 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:04:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:04:56 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 54 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 55 KiB/s wr, 6 op/s
Oct 01 17:04:56 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5c1c29e1-3838-4336-89e4-29126b32e8a3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:04:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5c1c29e1-3838-4336-89e4-29126b32e8a3, vol_name:cephfs) < ""
Oct 01 17:04:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5c1c29e1-3838-4336-89e4-29126b32e8a3/.meta.tmp'
Oct 01 17:04:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5c1c29e1-3838-4336-89e4-29126b32e8a3/.meta.tmp' to config b'/volumes/_nogroup/5c1c29e1-3838-4336-89e4-29126b32e8a3/.meta'
Oct 01 17:04:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5c1c29e1-3838-4336-89e4-29126b32e8a3, vol_name:cephfs) < ""
Oct 01 17:04:56 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5c1c29e1-3838-4336-89e4-29126b32e8a3", "format": "json"}]: dispatch
Oct 01 17:04:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5c1c29e1-3838-4336-89e4-29126b32e8a3, vol_name:cephfs) < ""
Oct 01 17:04:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5c1c29e1-3838-4336-89e4-29126b32e8a3, vol_name:cephfs) < ""
Oct 01 17:04:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:04:56 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:04:56 compute-0 ceph-mon[74273]: pgmap v1059: 305 pgs: 305 active+clean; 54 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 55 KiB/s wr, 6 op/s
Oct 01 17:04:56 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:04:56 compute-0 podman[271645]: 2025-10-01 17:04:56.801460574 +0000 UTC m=+0.104758271 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 01 17:04:57 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5c1c29e1-3838-4336-89e4-29126b32e8a3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:04:57 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5c1c29e1-3838-4336-89e4-29126b32e8a3", "format": "json"}]: dispatch
Oct 01 17:04:57 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:04:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:04:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Oct 01 17:04:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:04:57 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice bob with tenant 1841221f332340a299707d253063659f
Oct 01 17:04:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:04:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:04:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:04:58 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 54 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 55 KiB/s wr, 6 op/s
Oct 01 17:04:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:04:58 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:04:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:04:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:04:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:04:58 compute-0 ceph-mon[74273]: pgmap v1060: 305 pgs: 305 active+clean; 54 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 55 KiB/s wr, 6 op/s
Oct 01 17:04:58 compute-0 podman[271665]: 2025-10-01 17:04:58.772852731 +0000 UTC m=+0.081523497 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct 01 17:04:58 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5215a169-5994-425c-a184-e62fdc22c149", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:04:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5215a169-5994-425c-a184-e62fdc22c149, vol_name:cephfs) < ""
Oct 01 17:04:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5215a169-5994-425c-a184-e62fdc22c149/.meta.tmp'
Oct 01 17:04:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5215a169-5994-425c-a184-e62fdc22c149/.meta.tmp' to config b'/volumes/_nogroup/5215a169-5994-425c-a184-e62fdc22c149/.meta'
Oct 01 17:04:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5215a169-5994-425c-a184-e62fdc22c149, vol_name:cephfs) < ""
Oct 01 17:04:58 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5215a169-5994-425c-a184-e62fdc22c149", "format": "json"}]: dispatch
Oct 01 17:04:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5215a169-5994-425c-a184-e62fdc22c149, vol_name:cephfs) < ""
Oct 01 17:04:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5215a169-5994-425c-a184-e62fdc22c149, vol_name:cephfs) < ""
Oct 01 17:04:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:04:58 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:04:59 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5215a169-5994-425c-a184-e62fdc22c149", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:04:59 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5215a169-5994-425c-a184-e62fdc22c149", "format": "json"}]: dispatch
Oct 01 17:04:59 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:04:59 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c1c29e1-3838-4336-89e4-29126b32e8a3", "auth_id": "Joe", "tenant_id": "86a0062edb9a4a3293bf2e93012e4f13", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:04:59 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:5c1c29e1-3838-4336-89e4-29126b32e8a3, tenant_id:86a0062edb9a4a3293bf2e93012e4f13, vol_name:cephfs) < ""
Oct 01 17:04:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) v1
Oct 01 17:04:59 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Oct 01 17:04:59 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID Joe with tenant 86a0062edb9a4a3293bf2e93012e4f13
Oct 01 17:04:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c1c29e1-3838-4336-89e4-29126b32e8a3/6880fe2f-c522-4c39-a5e0-71e5bd9f0ff3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5c1c29e1-3838-4336-89e4-29126b32e8a3", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:04:59 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c1c29e1-3838-4336-89e4-29126b32e8a3/6880fe2f-c522-4c39-a5e0-71e5bd9f0ff3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5c1c29e1-3838-4336-89e4-29126b32e8a3", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:04:59 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c1c29e1-3838-4336-89e4-29126b32e8a3/6880fe2f-c522-4c39-a5e0-71e5bd9f0ff3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5c1c29e1-3838-4336-89e4-29126b32e8a3", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:00 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 55 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 97 KiB/s wr, 10 op/s
Oct 01 17:05:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:5c1c29e1-3838-4336-89e4-29126b32e8a3, tenant_id:86a0062edb9a4a3293bf2e93012e4f13, vol_name:cephfs) < ""
Oct 01 17:05:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:05:00 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c1c29e1-3838-4336-89e4-29126b32e8a3", "auth_id": "Joe", "tenant_id": "86a0062edb9a4a3293bf2e93012e4f13", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Oct 01 17:05:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c1c29e1-3838-4336-89e4-29126b32e8a3/6880fe2f-c522-4c39-a5e0-71e5bd9f0ff3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5c1c29e1-3838-4336-89e4-29126b32e8a3", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c1c29e1-3838-4336-89e4-29126b32e8a3/6880fe2f-c522-4c39-a5e0-71e5bd9f0ff3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5c1c29e1-3838-4336-89e4-29126b32e8a3", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:00 compute-0 ceph-mon[74273]: pgmap v1061: 305 pgs: 305 active+clean; 55 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 97 KiB/s wr, 10 op/s
Oct 01 17:05:01 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:05:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:02 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 55 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 63 KiB/s wr, 7 op/s
Oct 01 17:05:02 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:05:02 compute-0 ceph-mon[74273]: pgmap v1062: 305 pgs: 305 active+clean; 55 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 63 KiB/s wr, 7 op/s
Oct 01 17:05:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Oct 01 17:05:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:05:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Oct 01 17:05:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Oct 01 17:05:03 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct 01 17:05:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:03.127274) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 17:05:03 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct 01 17:05:03 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338303127312, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2365, "num_deletes": 253, "total_data_size": 2950358, "memory_usage": 3010896, "flush_reason": "Manual Compaction"}
Oct 01 17:05:03 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct 01 17:05:03 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 01 17:05:03 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338303460556, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 2890717, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21207, "largest_seqno": 23571, "table_properties": {"data_size": 2880375, "index_size": 6260, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 25107, "raw_average_key_size": 21, "raw_value_size": 2858115, "raw_average_value_size": 2420, "num_data_blocks": 277, "num_entries": 1181, "num_filter_entries": 1181, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759338160, "oldest_key_time": 1759338160, "file_creation_time": 1759338303, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct 01 17:05:03 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 333334 microseconds, and 7532 cpu microseconds.
Oct 01 17:05:03 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 17:05:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:03.460609) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 2890717 bytes OK
Oct 01 17:05:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:03.460630) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct 01 17:05:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:03.788165) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct 01 17:05:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:03.788205) EVENT_LOG_v1 {"time_micros": 1759338303788194, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 17:05:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:03.788228) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 17:05:03 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2939622, prev total WAL file size 2941667, number of live WAL files 2.
Oct 01 17:05:03 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:05:03 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:03.789355) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 01 17:05:03 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 17:05:03 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(2822KB)], [50(7540KB)]
Oct 01 17:05:03 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338303789415, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10612566, "oldest_snapshot_seqno": -1}
Oct 01 17:05:04 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 55 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 92 KiB/s wr, 10 op/s
Oct 01 17:05:04 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:05:04 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Oct 01 17:05:04 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 01 17:05:04 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5169 keys, 8877979 bytes, temperature: kUnknown
Oct 01 17:05:04 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338304598494, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 8877979, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8841483, "index_size": 22498, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12933, "raw_key_size": 127250, "raw_average_key_size": 24, "raw_value_size": 8746536, "raw_average_value_size": 1692, "num_data_blocks": 938, "num_entries": 5169, "num_filter_entries": 5169, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336399, "oldest_key_time": 0, "file_creation_time": 1759338303, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 01 17:05:04 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 17:05:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:04.598735) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8877979 bytes
Oct 01 17:05:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:04.730370) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 13.1 rd, 11.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 7.4 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 5697, records dropped: 528 output_compression: NoCompression
Oct 01 17:05:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:04.730412) EVENT_LOG_v1 {"time_micros": 1759338304730393, "job": 26, "event": "compaction_finished", "compaction_time_micros": 809143, "compaction_time_cpu_micros": 37910, "output_level": 6, "num_output_files": 1, "total_output_size": 8877979, "num_input_records": 5697, "num_output_records": 5169, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 17:05:04 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:05:04 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338304731097, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct 01 17:05:04 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:05:04 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338304732657, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct 01 17:05:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:03.789243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:05:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:04.732689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:05:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:04.732694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:05:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:04.732696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:05:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:04.732698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:05:04 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:04.732699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:05:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:05 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:05:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:05:05 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:05:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:05:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:05 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f4359a50-b671-43df-8a71-afb168cd35b0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:05:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f4359a50-b671-43df-8a71-afb168cd35b0, vol_name:cephfs) < ""
Oct 01 17:05:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:05:05 compute-0 ceph-mon[74273]: pgmap v1063: 305 pgs: 305 active+clean; 55 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 92 KiB/s wr, 10 op/s
Oct 01 17:05:05 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:05:06 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 55 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 70 KiB/s wr, 7 op/s
Oct 01 17:05:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f4359a50-b671-43df-8a71-afb168cd35b0/.meta.tmp'
Oct 01 17:05:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f4359a50-b671-43df-8a71-afb168cd35b0/.meta.tmp' to config b'/volumes/_nogroup/f4359a50-b671-43df-8a71-afb168cd35b0/.meta'
Oct 01 17:05:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f4359a50-b671-43df-8a71-afb168cd35b0, vol_name:cephfs) < ""
Oct 01 17:05:06 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f4359a50-b671-43df-8a71-afb168cd35b0", "format": "json"}]: dispatch
Oct 01 17:05:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f4359a50-b671-43df-8a71-afb168cd35b0, vol_name:cephfs) < ""
Oct 01 17:05:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f4359a50-b671-43df-8a71-afb168cd35b0, vol_name:cephfs) < ""
Oct 01 17:05:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:05:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:05:06 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff4382dd-2fca-4301-a194-9666d40ecd5f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:05:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff4382dd-2fca-4301-a194-9666d40ecd5f, vol_name:cephfs) < ""
Oct 01 17:05:07 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f4359a50-b671-43df-8a71-afb168cd35b0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:05:07 compute-0 ceph-mon[74273]: pgmap v1064: 305 pgs: 305 active+clean; 55 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 70 KiB/s wr, 7 op/s
Oct 01 17:05:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:05:07 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:05:07 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 6923 writes, 27K keys, 6923 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6923 writes, 1355 syncs, 5.11 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1197 writes, 3013 keys, 1197 commit groups, 1.0 writes per commit group, ingest: 1.69 MB, 0.00 MB/s
                                           Interval WAL: 1197 writes, 417 syncs, 2.87 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 17:05:08 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 55 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 70 KiB/s wr, 7 op/s
Oct 01 17:05:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff4382dd-2fca-4301-a194-9666d40ecd5f/.meta.tmp'
Oct 01 17:05:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff4382dd-2fca-4301-a194-9666d40ecd5f/.meta.tmp' to config b'/volumes/_nogroup/ff4382dd-2fca-4301-a194-9666d40ecd5f/.meta'
Oct 01 17:05:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff4382dd-2fca-4301-a194-9666d40ecd5f, vol_name:cephfs) < ""
Oct 01 17:05:08 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff4382dd-2fca-4301-a194-9666d40ecd5f", "format": "json"}]: dispatch
Oct 01 17:05:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff4382dd-2fca-4301-a194-9666d40ecd5f, vol_name:cephfs) < ""
Oct 01 17:05:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff4382dd-2fca-4301-a194-9666d40ecd5f, vol_name:cephfs) < ""
Oct 01 17:05:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:05:08 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:05:08 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f4359a50-b671-43df-8a71-afb168cd35b0", "format": "json"}]: dispatch
Oct 01 17:05:08 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff4382dd-2fca-4301-a194-9666d40ecd5f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:05:08 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:05:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:05:08 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Oct 01 17:05:08 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:05:08 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice bob with tenant 1841221f332340a299707d253063659f
Oct 01 17:05:09 compute-0 ceph-mon[74273]: pgmap v1065: 305 pgs: 305 active+clean; 55 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 70 KiB/s wr, 7 op/s
Oct 01 17:05:09 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff4382dd-2fca-4301-a194-9666d40ecd5f", "format": "json"}]: dispatch
Oct 01 17:05:09 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:05:09 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:05:09 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:05:09 compute-0 podman[271686]: 2025-10-01 17:05:09.791151719 +0000 UTC m=+0.106321417 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 01 17:05:10 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 55 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 96 KiB/s wr, 9 op/s
Oct 01 17:05:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:05:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:05:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:10 compute-0 ceph-mon[74273]: pgmap v1066: 305 pgs: 305 active+clean; 55 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 96 KiB/s wr, 9 op/s
Oct 01 17:05:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_17:05:11
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'backups', 'images', 'volumes', '.mgr', 'default.rgw.control', 'default.rgw.meta', '.rgw.root']
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:05:11 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:05:11 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2753 syncs, 3.79 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3588 writes, 13K keys, 3588 commit groups, 1.0 writes per commit group, ingest: 19.98 MB, 0.03 MB/s
                                           Interval WAL: 3588 writes, 1465 syncs, 2.45 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:05:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:05:12 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 55 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 54 KiB/s wr, 4 op/s
Oct 01 17:05:12 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:05:12 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "885343a2-ae40-4dfa-a601-01b84b6596ee", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:05:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:885343a2-ae40-4dfa-a601-01b84b6596ee, vol_name:cephfs) < ""
Oct 01 17:05:13 compute-0 ceph-mon[74273]: pgmap v1067: 305 pgs: 305 active+clean; 55 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 54 KiB/s wr, 4 op/s
Oct 01 17:05:13 compute-0 podman[271712]: 2025-10-01 17:05:13.779325556 +0000 UTC m=+0.090310287 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 01 17:05:14 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 55 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 78 KiB/s wr, 7 op/s
Oct 01 17:05:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/885343a2-ae40-4dfa-a601-01b84b6596ee/.meta.tmp'
Oct 01 17:05:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/885343a2-ae40-4dfa-a601-01b84b6596ee/.meta.tmp' to config b'/volumes/_nogroup/885343a2-ae40-4dfa-a601-01b84b6596ee/.meta'
Oct 01 17:05:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:885343a2-ae40-4dfa-a601-01b84b6596ee, vol_name:cephfs) < ""
Oct 01 17:05:14 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "885343a2-ae40-4dfa-a601-01b84b6596ee", "format": "json"}]: dispatch
Oct 01 17:05:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:885343a2-ae40-4dfa-a601-01b84b6596ee, vol_name:cephfs) < ""
Oct 01 17:05:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:885343a2-ae40-4dfa-a601-01b84b6596ee, vol_name:cephfs) < ""
Oct 01 17:05:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:05:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:05:14 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f4359a50-b671-43df-8a71-afb168cd35b0", "auth_id": "Joe", "tenant_id": "fb3120e923574f759bb16e2e2d603473", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:f4359a50-b671-43df-8a71-afb168cd35b0, tenant_id:fb3120e923574f759bb16e2e2d603473, vol_name:cephfs) < ""
Oct 01 17:05:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) v1
Oct 01 17:05:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Oct 01 17:05:14 compute-0 ceph-mgr[74571]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use
Oct 01 17:05:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:f4359a50-b671-43df-8a71-afb168cd35b0, tenant_id:fb3120e923574f759bb16e2e2d603473, vol_name:cephfs) < ""
Oct 01 17:05:14 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:05:14.300+0000 7f813a030640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Oct 01 17:05:14 compute-0 ceph-mgr[74571]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Oct 01 17:05:14 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "885343a2-ae40-4dfa-a601-01b84b6596ee", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:05:14 compute-0 ceph-mon[74273]: pgmap v1068: 305 pgs: 305 active+clean; 55 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 78 KiB/s wr, 7 op/s
Oct 01 17:05:14 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "885343a2-ae40-4dfa-a601-01b84b6596ee", "format": "json"}]: dispatch
Oct 01 17:05:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:05:14 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f4359a50-b671-43df-8a71-afb168cd35b0", "auth_id": "Joe", "tenant_id": "fb3120e923574f759bb16e2e2d603473", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Oct 01 17:05:15 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:05:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:15 compute-0 nova_compute[259504]: 2025-10-01 17:05:15.521 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:05:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:05:15 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:05:16 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 55 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 49 KiB/s wr, 4 op/s
Oct 01 17:05:16 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:05:16 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 7483 writes, 29K keys, 7483 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7482 writes, 1560 syncs, 4.80 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1885 writes, 5550 keys, 1885 commit groups, 1.0 writes per commit group, ingest: 5.36 MB, 0.01 MB/s
                                           Interval WAL: 1884 writes, 696 syncs, 2.71 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 17:05:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Oct 01 17:05:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:05:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Oct 01 17:05:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Oct 01 17:05:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 01 17:05:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:16 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:05:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:05:16 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:05:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:05:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:16 compute-0 ceph-mon[74273]: pgmap v1069: 305 pgs: 305 active+clean; 55 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 49 KiB/s wr, 4 op/s
Oct 01 17:05:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:05:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Oct 01 17:05:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 01 17:05:16 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f4359a50-b671-43df-8a71-afb168cd35b0", "auth_id": "tempest-cephx-id-1818181793", "tenant_id": "fb3120e923574f759bb16e2e2d603473", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1818181793, format:json, prefix:fs subvolume authorize, sub_name:f4359a50-b671-43df-8a71-afb168cd35b0, tenant_id:fb3120e923574f759bb16e2e2d603473, vol_name:cephfs) < ""
Oct 01 17:05:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1818181793", "format": "json"} v 0) v1
Oct 01 17:05:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1818181793", "format": "json"}]: dispatch
Oct 01 17:05:16 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID tempest-cephx-id-1818181793 with tenant fb3120e923574f759bb16e2e2d603473
Oct 01 17:05:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1818181793", "caps": ["mds", "allow rw path=/volumes/_nogroup/f4359a50-b671-43df-8a71-afb168cd35b0/b045be5c-ec65-49ca-b68d-4dfa56f47772", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_f4359a50-b671-43df-8a71-afb168cd35b0", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:05:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1818181793", "caps": ["mds", "allow rw path=/volumes/_nogroup/f4359a50-b671-43df-8a71-afb168cd35b0/b045be5c-ec65-49ca-b68d-4dfa56f47772", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_f4359a50-b671-43df-8a71-afb168cd35b0", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1818181793", "caps": ["mds", "allow rw path=/volumes/_nogroup/f4359a50-b671-43df-8a71-afb168cd35b0/b045be5c-ec65-49ca-b68d-4dfa56f47772", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_f4359a50-b671-43df-8a71-afb168cd35b0", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1818181793, format:json, prefix:fs subvolume authorize, sub_name:f4359a50-b671-43df-8a71-afb168cd35b0, tenant_id:fb3120e923574f759bb16e2e2d603473, vol_name:cephfs) < ""
Oct 01 17:05:17 compute-0 nova_compute[259504]: 2025-10-01 17:05:17.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:05:17 compute-0 nova_compute[259504]: 2025-10-01 17:05:17.784 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:05:17 compute-0 nova_compute[259504]: 2025-10-01 17:05:17.784 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:05:17 compute-0 nova_compute[259504]: 2025-10-01 17:05:17.785 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:05:17 compute-0 nova_compute[259504]: 2025-10-01 17:05:17.785 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 17:05:17 compute-0 nova_compute[259504]: 2025-10-01 17:05:17.785 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:05:17 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:05:17 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f4359a50-b671-43df-8a71-afb168cd35b0", "auth_id": "tempest-cephx-id-1818181793", "tenant_id": "fb3120e923574f759bb16e2e2d603473", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1818181793", "format": "json"}]: dispatch
Oct 01 17:05:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1818181793", "caps": ["mds", "allow rw path=/volumes/_nogroup/f4359a50-b671-43df-8a71-afb168cd35b0/b045be5c-ec65-49ca-b68d-4dfa56f47772", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_f4359a50-b671-43df-8a71-afb168cd35b0", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1818181793", "caps": ["mds", "allow rw path=/volumes/_nogroup/f4359a50-b671-43df-8a71-afb168cd35b0/b045be5c-ec65-49ca-b68d-4dfa56f47772", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_f4359a50-b671-43df-8a71-afb168cd35b0", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:18 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 55 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 50 KiB/s wr, 4 op/s
Oct 01 17:05:18 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "885343a2-ae40-4dfa-a601-01b84b6596ee", "format": "json"}]: dispatch
Oct 01 17:05:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:885343a2-ae40-4dfa-a601-01b84b6596ee, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:05:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:885343a2-ae40-4dfa-a601-01b84b6596ee, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:05:18 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:05:18.115+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '885343a2-ae40-4dfa-a601-01b84b6596ee' of type subvolume
Oct 01 17:05:18 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '885343a2-ae40-4dfa-a601-01b84b6596ee' of type subvolume
Oct 01 17:05:18 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "885343a2-ae40-4dfa-a601-01b84b6596ee", "force": true, "format": "json"}]: dispatch
Oct 01 17:05:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:885343a2-ae40-4dfa-a601-01b84b6596ee, vol_name:cephfs) < ""
Oct 01 17:05:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/885343a2-ae40-4dfa-a601-01b84b6596ee'' moved to trashcan
Oct 01 17:05:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:05:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:885343a2-ae40-4dfa-a601-01b84b6596ee, vol_name:cephfs) < ""
Oct 01 17:05:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:05:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2270853364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:05:18 compute-0 nova_compute[259504]: 2025-10-01 17:05:18.421 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:05:18 compute-0 ceph-mgr[74571]: [devicehealth INFO root] Check health
Oct 01 17:05:18 compute-0 nova_compute[259504]: 2025-10-01 17:05:18.592 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 17:05:18 compute-0 nova_compute[259504]: 2025-10-01 17:05:18.593 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5112MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 17:05:18 compute-0 nova_compute[259504]: 2025-10-01 17:05:18.593 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:05:18 compute-0 nova_compute[259504]: 2025-10-01 17:05:18.594 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:05:18 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:05:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Oct 01 17:05:18 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:05:18 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice with tenant 1841221f332340a299707d253063659f
Oct 01 17:05:19 compute-0 nova_compute[259504]: 2025-10-01 17:05:19.111 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 17:05:19 compute-0 nova_compute[259504]: 2025-10-01 17:05:19.112 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 17:05:19 compute-0 nova_compute[259504]: 2025-10-01 17:05:19.137 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:05:19 compute-0 ceph-mon[74273]: pgmap v1070: 305 pgs: 305 active+clean; 55 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 50 KiB/s wr, 4 op/s
Oct 01 17:05:19 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "885343a2-ae40-4dfa-a601-01b84b6596ee", "format": "json"}]: dispatch
Oct 01 17:05:19 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "885343a2-ae40-4dfa-a601-01b84b6596ee", "force": true, "format": "json"}]: dispatch
Oct 01 17:05:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2270853364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:05:19 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:05:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:05:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1061403045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:05:19 compute-0 nova_compute[259504]: 2025-10-01 17:05:19.838 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.701s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:05:19 compute-0 nova_compute[259504]: 2025-10-01 17:05:19.845 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 17:05:19 compute-0 nova_compute[259504]: 2025-10-01 17:05:19.887 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 17:05:19 compute-0 nova_compute[259504]: 2025-10-01 17:05:19.889 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 17:05:19 compute-0 nova_compute[259504]: 2025-10-01 17:05:19.889 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:05:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:05:19.974 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:05:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:05:19.974 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:05:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:05:19.975 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:05:20 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 56 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 88 KiB/s wr, 7 op/s
Oct 01 17:05:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:05:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:20 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:20 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1061403045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:05:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:20 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00023833797291137593 of space, bias 4.0, pg target 0.28600556749365114 quantized to 16 (current 16)
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 8.902699094875301e-07 of space, bias 1.0, pg target 0.00026708097284625906 quantized to 32 (current 32)
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f4359a50-b671-43df-8a71-afb168cd35b0", "auth_id": "Joe", "format": "json"}]: dispatch
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:f4359a50-b671-43df-8a71-afb168cd35b0, vol_name:cephfs) < ""
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume 'f4359a50-b671-43df-8a71-afb168cd35b0'
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:f4359a50-b671-43df-8a71-afb168cd35b0, vol_name:cephfs) < ""
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "f4359a50-b671-43df-8a71-afb168cd35b0", "auth_id": "Joe", "format": "json"}]: dispatch
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:f4359a50-b671-43df-8a71-afb168cd35b0, vol_name:cephfs) < ""
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/f4359a50-b671-43df-8a71-afb168cd35b0/b045be5c-ec65-49ca-b68d-4dfa56f47772
Oct 01 17:05:21 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/f4359a50-b671-43df-8a71-afb168cd35b0/b045be5c-ec65-49ca-b68d-4dfa56f47772],prefix=session evict} (starting...)
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:f4359a50-b671-43df-8a71-afb168cd35b0, vol_name:cephfs) < ""
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff4382dd-2fca-4301-a194-9666d40ecd5f", "format": "json"}]: dispatch
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ff4382dd-2fca-4301-a194-9666d40ecd5f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ff4382dd-2fca-4301-a194-9666d40ecd5f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:05:21 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:05:21.551+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff4382dd-2fca-4301-a194-9666d40ecd5f' of type subvolume
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff4382dd-2fca-4301-a194-9666d40ecd5f' of type subvolume
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff4382dd-2fca-4301-a194-9666d40ecd5f", "force": true, "format": "json"}]: dispatch
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff4382dd-2fca-4301-a194-9666d40ecd5f, vol_name:cephfs) < ""
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ff4382dd-2fca-4301-a194-9666d40ecd5f'' moved to trashcan
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:05:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff4382dd-2fca-4301-a194-9666d40ecd5f, vol_name:cephfs) < ""
Oct 01 17:05:21 compute-0 ceph-mon[74273]: pgmap v1071: 305 pgs: 305 active+clean; 56 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 88 KiB/s wr, 7 op/s
Oct 01 17:05:22 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 56 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s wr, 5 op/s
Oct 01 17:05:22 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:05:22 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:22 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f4359a50-b671-43df-8a71-afb168cd35b0", "auth_id": "Joe", "format": "json"}]: dispatch
Oct 01 17:05:22 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "f4359a50-b671-43df-8a71-afb168cd35b0", "auth_id": "Joe", "format": "json"}]: dispatch
Oct 01 17:05:22 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff4382dd-2fca-4301-a194-9666d40ecd5f", "format": "json"}]: dispatch
Oct 01 17:05:22 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff4382dd-2fca-4301-a194-9666d40ecd5f", "force": true, "format": "json"}]: dispatch
Oct 01 17:05:22 compute-0 ceph-mon[74273]: pgmap v1072: 305 pgs: 305 active+clean; 56 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s wr, 5 op/s
Oct 01 17:05:22 compute-0 nova_compute[259504]: 2025-10-01 17:05:22.890 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:05:22 compute-0 nova_compute[259504]: 2025-10-01 17:05:22.890 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:05:22 compute-0 nova_compute[259504]: 2025-10-01 17:05:22.891 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 17:05:22 compute-0 nova_compute[259504]: 2025-10-01 17:05:22.891 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 17:05:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Oct 01 17:05:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:05:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Oct 01 17:05:22 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Oct 01 17:05:22 compute-0 nova_compute[259504]: 2025-10-01 17:05:22.953 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 17:05:22 compute-0 nova_compute[259504]: 2025-10-01 17:05:22.953 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:05:22 compute-0 nova_compute[259504]: 2025-10-01 17:05:22.954 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:05:22 compute-0 nova_compute[259504]: 2025-10-01 17:05:22.954 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:05:22 compute-0 nova_compute[259504]: 2025-10-01 17:05:22.955 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:05:22 compute-0 nova_compute[259504]: 2025-10-01 17:05:22.955 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 17:05:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 01 17:05:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:23 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:05:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:05:23 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:05:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:05:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:23 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f4359a50-b671-43df-8a71-afb168cd35b0", "auth_id": "tempest-cephx-id-1818181793", "format": "json"}]: dispatch
Oct 01 17:05:23 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1818181793, format:json, prefix:fs subvolume deauthorize, sub_name:f4359a50-b671-43df-8a71-afb168cd35b0, vol_name:cephfs) < ""
Oct 01 17:05:23 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:05:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:05:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Oct 01 17:05:23 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 01 17:05:23 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:05:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1818181793", "format": "json"} v 0) v1
Oct 01 17:05:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1818181793", "format": "json"}]: dispatch
Oct 01 17:05:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1818181793"} v 0) v1
Oct 01 17:05:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1818181793"}]: dispatch
Oct 01 17:05:23 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1818181793"}]': finished
Oct 01 17:05:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1818181793, format:json, prefix:fs subvolume deauthorize, sub_name:f4359a50-b671-43df-8a71-afb168cd35b0, vol_name:cephfs) < ""
Oct 01 17:05:24 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "f4359a50-b671-43df-8a71-afb168cd35b0", "auth_id": "tempest-cephx-id-1818181793", "format": "json"}]: dispatch
Oct 01 17:05:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1818181793, format:json, prefix:fs subvolume evict, sub_name:f4359a50-b671-43df-8a71-afb168cd35b0, vol_name:cephfs) < ""
Oct 01 17:05:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1818181793, client_metadata.root=/volumes/_nogroup/f4359a50-b671-43df-8a71-afb168cd35b0/b045be5c-ec65-49ca-b68d-4dfa56f47772
Oct 01 17:05:24 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=tempest-cephx-id-1818181793,client_metadata.root=/volumes/_nogroup/f4359a50-b671-43df-8a71-afb168cd35b0/b045be5c-ec65-49ca-b68d-4dfa56f47772],prefix=session evict} (starting...)
Oct 01 17:05:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:05:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1818181793, format:json, prefix:fs subvolume evict, sub_name:f4359a50-b671-43df-8a71-afb168cd35b0, vol_name:cephfs) < ""
Oct 01 17:05:24 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 56 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 92 KiB/s wr, 9 op/s
Oct 01 17:05:24 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f4359a50-b671-43df-8a71-afb168cd35b0", "auth_id": "tempest-cephx-id-1818181793", "format": "json"}]: dispatch
Oct 01 17:05:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1818181793", "format": "json"}]: dispatch
Oct 01 17:05:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1818181793"}]: dispatch
Oct 01 17:05:24 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1818181793"}]': finished
Oct 01 17:05:24 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "f4359a50-b671-43df-8a71-afb168cd35b0", "auth_id": "tempest-cephx-id-1818181793", "format": "json"}]: dispatch
Oct 01 17:05:24 compute-0 ceph-mon[74273]: pgmap v1073: 305 pgs: 305 active+clean; 56 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 92 KiB/s wr, 9 op/s
Oct 01 17:05:24 compute-0 nova_compute[259504]: 2025-10-01 17:05:24.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:05:25 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5215a169-5994-425c-a184-e62fdc22c149", "format": "json"}]: dispatch
Oct 01 17:05:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5215a169-5994-425c-a184-e62fdc22c149, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:05:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5215a169-5994-425c-a184-e62fdc22c149, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:05:25 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:05:25.043+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5215a169-5994-425c-a184-e62fdc22c149' of type subvolume
Oct 01 17:05:25 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5215a169-5994-425c-a184-e62fdc22c149' of type subvolume
Oct 01 17:05:25 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5215a169-5994-425c-a184-e62fdc22c149", "force": true, "format": "json"}]: dispatch
Oct 01 17:05:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5215a169-5994-425c-a184-e62fdc22c149, vol_name:cephfs) < ""
Oct 01 17:05:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5215a169-5994-425c-a184-e62fdc22c149'' moved to trashcan
Oct 01 17:05:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:05:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5215a169-5994-425c-a184-e62fdc22c149, vol_name:cephfs) < ""
Oct 01 17:05:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:05:26 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5215a169-5994-425c-a184-e62fdc22c149", "format": "json"}]: dispatch
Oct 01 17:05:26 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5215a169-5994-425c-a184-e62fdc22c149", "force": true, "format": "json"}]: dispatch
Oct 01 17:05:26 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 56 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 69 KiB/s wr, 7 op/s
Oct 01 17:05:26 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:05:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:05:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Oct 01 17:05:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:05:26 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice with tenant 1841221f332340a299707d253063659f
Oct 01 17:05:27 compute-0 ceph-mon[74273]: pgmap v1074: 305 pgs: 305 active+clean; 56 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 69 KiB/s wr, 7 op/s
Oct 01 17:05:27 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:05:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:05:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:27 compute-0 podman[271780]: 2025-10-01 17:05:27.728858302 +0000 UTC m=+0.051619549 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 01 17:05:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:05:28 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 57 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 121 KiB/s wr, 13 op/s
Oct 01 17:05:28 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c1c29e1-3838-4336-89e4-29126b32e8a3", "auth_id": "Joe", "format": "json"}]: dispatch
Oct 01 17:05:28 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:5c1c29e1-3838-4336-89e4-29126b32e8a3, vol_name:cephfs) < ""
Oct 01 17:05:28 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:05:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) v1
Oct 01 17:05:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Oct 01 17:05:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0) v1
Oct 01 17:05:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.Joe"}]: dispatch
Oct 01 17:05:28 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Oct 01 17:05:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:5c1c29e1-3838-4336-89e4-29126b32e8a3, vol_name:cephfs) < ""
Oct 01 17:05:29 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c1c29e1-3838-4336-89e4-29126b32e8a3", "auth_id": "Joe", "format": "json"}]: dispatch
Oct 01 17:05:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:5c1c29e1-3838-4336-89e4-29126b32e8a3, vol_name:cephfs) < ""
Oct 01 17:05:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/5c1c29e1-3838-4336-89e4-29126b32e8a3/6880fe2f-c522-4c39-a5e0-71e5bd9f0ff3
Oct 01 17:05:29 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/5c1c29e1-3838-4336-89e4-29126b32e8a3/6880fe2f-c522-4c39-a5e0-71e5bd9f0ff3],prefix=session evict} (starting...)
Oct 01 17:05:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:05:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:5c1c29e1-3838-4336-89e4-29126b32e8a3, vol_name:cephfs) < ""
Oct 01 17:05:29 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "72e3096d-d9f8-4227-b53d-59ed09f4fa07", "format": "json"}]: dispatch
Oct 01 17:05:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:72e3096d-d9f8-4227-b53d-59ed09f4fa07, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:05:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:72e3096d-d9f8-4227-b53d-59ed09f4fa07, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:05:29 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:05:29.302+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '72e3096d-d9f8-4227-b53d-59ed09f4fa07' of type subvolume
Oct 01 17:05:29 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '72e3096d-d9f8-4227-b53d-59ed09f4fa07' of type subvolume
Oct 01 17:05:29 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "72e3096d-d9f8-4227-b53d-59ed09f4fa07", "force": true, "format": "json"}]: dispatch
Oct 01 17:05:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:72e3096d-d9f8-4227-b53d-59ed09f4fa07, vol_name:cephfs) < ""
Oct 01 17:05:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/72e3096d-d9f8-4227-b53d-59ed09f4fa07'' moved to trashcan
Oct 01 17:05:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:05:29 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:72e3096d-d9f8-4227-b53d-59ed09f4fa07, vol_name:cephfs) < ""
Oct 01 17:05:29 compute-0 ceph-mon[74273]: pgmap v1075: 305 pgs: 305 active+clean; 57 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 121 KiB/s wr, 13 op/s
Oct 01 17:05:29 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c1c29e1-3838-4336-89e4-29126b32e8a3", "auth_id": "Joe", "format": "json"}]: dispatch
Oct 01 17:05:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Oct 01 17:05:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.Joe"}]: dispatch
Oct 01 17:05:29 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Oct 01 17:05:29 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c1c29e1-3838-4336-89e4-29126b32e8a3", "auth_id": "Joe", "format": "json"}]: dispatch
Oct 01 17:05:29 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "72e3096d-d9f8-4227-b53d-59ed09f4fa07", "format": "json"}]: dispatch
Oct 01 17:05:29 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "72e3096d-d9f8-4227-b53d-59ed09f4fa07", "force": true, "format": "json"}]: dispatch
Oct 01 17:05:29 compute-0 podman[271805]: 2025-10-01 17:05:29.77122886 +0000 UTC m=+0.080425732 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 01 17:05:30 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 57 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 122 KiB/s wr, 13 op/s
Oct 01 17:05:30 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:05:30 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:30 compute-0 ceph-mon[74273]: pgmap v1076: 305 pgs: 305 active+clean; 57 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 122 KiB/s wr, 13 op/s
Oct 01 17:05:30 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:05:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:05:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Oct 01 17:05:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:05:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Oct 01 17:05:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Oct 01 17:05:31 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 01 17:05:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Oct 01 17:05:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Oct 01 17:05:32 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 01 17:05:32 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 57 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 84 KiB/s wr, 10 op/s
Oct 01 17:05:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:32 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:05:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:05:32 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:05:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:05:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:33 compute-0 ceph-mon[74273]: pgmap v1077: 305 pgs: 305 active+clean; 57 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 84 KiB/s wr, 10 op/s
Oct 01 17:05:33 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice", "format": "json"}]: dispatch
Oct 01 17:05:33 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "47b2be1b-8457-4141-a779-0c9e30960e86", "auth_id": "admin", "tenant_id": "86a0062edb9a4a3293bf2e93012e4f13", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:33 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:47b2be1b-8457-4141-a779-0c9e30960e86, tenant_id:86a0062edb9a4a3293bf2e93012e4f13, vol_name:cephfs) < ""
Oct 01 17:05:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0) v1
Oct 01 17:05:33 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin", "format": "json"}]: dispatch
Oct 01 17:05:33 compute-0 ceph-mgr[74571]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Oct 01 17:05:33 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:47b2be1b-8457-4141-a779-0c9e30960e86, tenant_id:86a0062edb9a4a3293bf2e93012e4f13, vol_name:cephfs) < ""
Oct 01 17:05:33 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:05:33.468+0000 7f813a030640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Oct 01 17:05:33 compute-0 ceph-mgr[74571]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Oct 01 17:05:34 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 57 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 116 KiB/s wr, 14 op/s
Oct 01 17:05:34 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin", "format": "json"}]: dispatch
Oct 01 17:05:34 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:05:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Oct 01 17:05:34 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:05:34 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice_bob with tenant 1841221f332340a299707d253063659f
Oct 01 17:05:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:05:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:35 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "47b2be1b-8457-4141-a779-0c9e30960e86", "auth_id": "admin", "tenant_id": "86a0062edb9a4a3293bf2e93012e4f13", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:35 compute-0 ceph-mon[74273]: pgmap v1078: 305 pgs: 305 active+clean; 57 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 116 KiB/s wr, 14 op/s
Oct 01 17:05:35 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:05:35 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:35 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:05:35 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "47b2be1b-8457-4141-a779-0c9e30960e86", "auth_id": "david", "tenant_id": "86a0062edb9a4a3293bf2e93012e4f13", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:47b2be1b-8457-4141-a779-0c9e30960e86, tenant_id:86a0062edb9a4a3293bf2e93012e4f13, vol_name:cephfs) < ""
Oct 01 17:05:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) v1
Oct 01 17:05:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Oct 01 17:05:35 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID david with tenant 86a0062edb9a4a3293bf2e93012e4f13
Oct 01 17:05:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:05:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/47b2be1b-8457-4141-a779-0c9e30960e86/103fa946-2c7f-48e4-9385-b7cf2236d2f2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_47b2be1b-8457-4141-a779-0c9e30960e86", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:05:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/47b2be1b-8457-4141-a779-0c9e30960e86/103fa946-2c7f-48e4-9385-b7cf2236d2f2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_47b2be1b-8457-4141-a779-0c9e30960e86", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:35 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/47b2be1b-8457-4141-a779-0c9e30960e86/103fa946-2c7f-48e4-9385-b7cf2236d2f2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_47b2be1b-8457-4141-a779-0c9e30960e86", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:35 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 01 17:05:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:47b2be1b-8457-4141-a779-0c9e30960e86, tenant_id:86a0062edb9a4a3293bf2e93012e4f13, vol_name:cephfs) < ""
Oct 01 17:05:36 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 57 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 KiB/s wr, 9 op/s
Oct 01 17:05:36 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Oct 01 17:05:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/47b2be1b-8457-4141-a779-0c9e30960e86/103fa946-2c7f-48e4-9385-b7cf2236d2f2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_47b2be1b-8457-4141-a779-0c9e30960e86", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:36 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/47b2be1b-8457-4141-a779-0c9e30960e86/103fa946-2c7f-48e4-9385-b7cf2236d2f2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_47b2be1b-8457-4141-a779-0c9e30960e86", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:37 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "47b2be1b-8457-4141-a779-0c9e30960e86", "auth_id": "david", "tenant_id": "86a0062edb9a4a3293bf2e93012e4f13", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:37 compute-0 ceph-mon[74273]: pgmap v1079: 305 pgs: 305 active+clean; 57 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 KiB/s wr, 9 op/s
Oct 01 17:05:37 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:05:37 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 57 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 97 KiB/s wr, 11 op/s
Oct 01 17:05:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Oct 01 17:05:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:05:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Oct 01 17:05:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Oct 01 17:05:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:05:38 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:38 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:05:38 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Oct 01 17:05:38 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, vol_name:cephfs) < ""
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/.meta.tmp'
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/.meta.tmp' to config b'/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/.meta'
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, vol_name:cephfs) < ""
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "format": "json"}]: dispatch
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, vol_name:cephfs) < ""
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, vol_name:cephfs) < ""
Oct 01 17:05:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:05:38 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "90543113-872c-4287-b768-45f56fc8f849", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:90543113-872c-4287-b768-45f56fc8f849, vol_name:cephfs) < ""
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/90543113-872c-4287-b768-45f56fc8f849/.meta.tmp'
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/90543113-872c-4287-b768-45f56fc8f849/.meta.tmp' to config b'/volumes/_nogroup/90543113-872c-4287-b768-45f56fc8f849/.meta'
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:90543113-872c-4287-b768-45f56fc8f849, vol_name:cephfs) < ""
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "90543113-872c-4287-b768-45f56fc8f849", "format": "json"}]: dispatch
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:90543113-872c-4287-b768-45f56fc8f849, vol_name:cephfs) < ""
Oct 01 17:05:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:90543113-872c-4287-b768-45f56fc8f849, vol_name:cephfs) < ""
Oct 01 17:05:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:05:38 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:05:39 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:05:39 compute-0 ceph-mon[74273]: pgmap v1080: 305 pgs: 305 active+clean; 57 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 97 KiB/s wr, 11 op/s
Oct 01 17:05:39 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:05:39 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:05:39 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:05:40 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 58 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 91 KiB/s wr, 9 op/s
Oct 01 17:05:40 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:05:40 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "format": "json"}]: dispatch
Oct 01 17:05:40 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "90543113-872c-4287-b768-45f56fc8f849", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:05:40 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "90543113-872c-4287-b768-45f56fc8f849", "format": "json"}]: dispatch
Oct 01 17:05:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:05:40 compute-0 podman[271828]: 2025-10-01 17:05:40.782102737 +0000 UTC m=+0.095745650 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 01 17:05:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:05:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:05:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:05:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:05:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:05:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:05:41 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:05:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:05:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Oct 01 17:05:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:05:41 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice_bob with tenant 1841221f332340a299707d253063659f
Oct 01 17:05:41 compute-0 ceph-mon[74273]: pgmap v1081: 305 pgs: 305 active+clean; 58 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 91 KiB/s wr, 9 op/s
Oct 01 17:05:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:05:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:41 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:05:42 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:05:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:05:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp'
Oct 01 17:05:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp' to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta'
Oct 01 17:05:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:05:42 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "format": "json"}]: dispatch
Oct 01 17:05:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:05:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:05:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:05:42 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:05:42 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 58 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 91 KiB/s wr, 9 op/s
Oct 01 17:05:42 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "90543113-872c-4287-b768-45f56fc8f849", "auth_id": "david", "tenant_id": "fb3120e923574f759bb16e2e2d603473", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:90543113-872c-4287-b768-45f56fc8f849, tenant_id:fb3120e923574f759bb16e2e2d603473, vol_name:cephfs) < ""
Oct 01 17:05:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) v1
Oct 01 17:05:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Oct 01 17:05:42 compute-0 ceph-mgr[74571]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use
Oct 01 17:05:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:90543113-872c-4287-b768-45f56fc8f849, tenant_id:fb3120e923574f759bb16e2e2d603473, vol_name:cephfs) < ""
Oct 01 17:05:42 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:05:42.363+0000 7f813a030640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Oct 01 17:05:42 compute-0 ceph-mgr[74571]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Oct 01 17:05:42 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "auth_id": "eve49", "tenant_id": "ad666ccc4f754e07a5041ffb5aaf32ae", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, tenant_id:ad666ccc4f754e07a5041ffb5aaf32ae, vol_name:cephfs) < ""
Oct 01 17:05:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) v1
Oct 01 17:05:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.eve49", "format": "json"}]: dispatch
Oct 01 17:05:42 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID eve49 with tenant ad666ccc4f754e07a5041ffb5aaf32ae
Oct 01 17:05:42 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:05:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:05:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:42 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:05:42 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "format": "json"}]: dispatch
Oct 01 17:05:42 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:05:42 compute-0 ceph-mon[74273]: pgmap v1082: 305 pgs: 305 active+clean; 58 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 91 KiB/s wr, 9 op/s
Oct 01 17:05:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Oct 01 17:05:42 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.eve49", "format": "json"}]: dispatch
Oct 01 17:05:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_1c897d2a-1173-4d28-affc-8c999f770456", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:05:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_1c897d2a-1173-4d28-affc-8c999f770456", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_1c897d2a-1173-4d28-affc-8c999f770456", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, tenant_id:ad666ccc4f754e07a5041ffb5aaf32ae, vol_name:cephfs) < ""
Oct 01 17:05:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 17:05:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1522609104' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:05:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 17:05:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1522609104' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:05:44 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "90543113-872c-4287-b768-45f56fc8f849", "auth_id": "david", "tenant_id": "fb3120e923574f759bb16e2e2d603473", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:44 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "auth_id": "eve49", "tenant_id": "ad666ccc4f754e07a5041ffb5aaf32ae", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_1c897d2a-1173-4d28-affc-8c999f770456", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_1c897d2a-1173-4d28-affc-8c999f770456", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1522609104' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:05:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1522609104' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:05:44 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 58 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 125 KiB/s wr, 13 op/s
Oct 01 17:05:44 compute-0 podman[271857]: 2025-10-01 17:05:44.756999411 +0000 UTC m=+0.071423851 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 01 17:05:44 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:05:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Oct 01 17:05:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:05:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Oct 01 17:05:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Oct 01 17:05:45 compute-0 ceph-mon[74273]: pgmap v1083: 305 pgs: 305 active+clean; 58 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 125 KiB/s wr, 13 op/s
Oct 01 17:05:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:05:45 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "994e3294-4a8a-410f-bff7-df7bb18891d7", "format": "json"}]: dispatch
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:994e3294-4a8a-410f-bff7-df7bb18891d7, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:994e3294-4a8a-410f-bff7-df7bb18891d7, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "auth_id": "eve48", "tenant_id": "ad666ccc4f754e07a5041ffb5aaf32ae", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, tenant_id:ad666ccc4f754e07a5041ffb5aaf32ae, vol_name:cephfs) < ""
Oct 01 17:05:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) v1
Oct 01 17:05:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.eve48", "format": "json"}]: dispatch
Oct 01 17:05:45 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID eve48 with tenant ad666ccc4f754e07a5041ffb5aaf32ae
Oct 01 17:05:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:05:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_1c897d2a-1173-4d28-affc-8c999f770456", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:05:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_1c897d2a-1173-4d28-affc-8c999f770456", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:45.751462) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338345751520, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 868, "num_deletes": 255, "total_data_size": 782960, "memory_usage": 800328, "flush_reason": "Manual Compaction"}
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct 01 17:05:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_1c897d2a-1173-4d28-affc-8c999f770456", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338345762592, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 772860, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23572, "largest_seqno": 24439, "table_properties": {"data_size": 768553, "index_size": 1831, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10399, "raw_average_key_size": 19, "raw_value_size": 759521, "raw_average_value_size": 1411, "num_data_blocks": 82, "num_entries": 538, "num_filter_entries": 538, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759338303, "oldest_key_time": 1759338303, "file_creation_time": 1759338345, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 11196 microseconds, and 5044 cpu microseconds.
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:45.762658) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 772860 bytes OK
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:45.762682) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:45.764464) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:45.764485) EVENT_LOG_v1 {"time_micros": 1759338345764478, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:45.764509) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 778357, prev total WAL file size 779444, number of live WAL files 2.
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:45.765201) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353030' seq:72057594037927935, type:22 .. '6C6F676D00373531' seq:0, type:0; will stop at (end)
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(754KB)], [53(8669KB)]
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338345765247, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 9650839, "oldest_snapshot_seqno": -1}
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5185 keys, 9566498 bytes, temperature: kUnknown
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338345831733, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9566498, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9528448, "index_size": 23988, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12997, "raw_key_size": 129136, "raw_average_key_size": 24, "raw_value_size": 9431891, "raw_average_value_size": 1819, "num_data_blocks": 1000, "num_entries": 5185, "num_filter_entries": 5185, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336399, "oldest_key_time": 0, "file_creation_time": 1759338345, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:45.832161) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9566498 bytes
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:45.834215) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.0 rd, 143.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.5 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(24.9) write-amplify(12.4) OK, records in: 5707, records dropped: 522 output_compression: NoCompression
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:45.834247) EVENT_LOG_v1 {"time_micros": 1759338345834233, "job": 28, "event": "compaction_finished", "compaction_time_micros": 66574, "compaction_time_cpu_micros": 29383, "output_level": 6, "num_output_files": 1, "total_output_size": 9566498, "num_input_records": 5707, "num_output_records": 5185, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338345834612, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338345838310, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:45.765140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:45.838391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:45.838398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:45.838401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:45.838404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:05:45 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:05:45.838408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, tenant_id:ad666ccc4f754e07a5041ffb5aaf32ae, vol_name:cephfs) < ""
Oct 01 17:05:45 compute-0 sudo[271878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:05:45 compute-0 sudo[271878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "90543113-872c-4287-b768-45f56fc8f849", "auth_id": "david", "format": "json"}]: dispatch
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:90543113-872c-4287-b768-45f56fc8f849, vol_name:cephfs) < ""
Oct 01 17:05:45 compute-0 sudo[271878]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume '90543113-872c-4287-b768-45f56fc8f849'
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:90543113-872c-4287-b768-45f56fc8f849, vol_name:cephfs) < ""
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "90543113-872c-4287-b768-45f56fc8f849", "auth_id": "david", "format": "json"}]: dispatch
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:90543113-872c-4287-b768-45f56fc8f849, vol_name:cephfs) < ""
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/90543113-872c-4287-b768-45f56fc8f849/d840e995-dc57-43c1-a20c-f46f70fbb2fc
Oct 01 17:05:45 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/90543113-872c-4287-b768-45f56fc8f849/d840e995-dc57-43c1-a20c-f46f70fbb2fc],prefix=session evict} (starting...)
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:05:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:90543113-872c-4287-b768-45f56fc8f849, vol_name:cephfs) < ""
Oct 01 17:05:46 compute-0 sudo[271904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:05:46 compute-0 sudo[271904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:46 compute-0 sudo[271904]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:46 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:05:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Oct 01 17:05:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Oct 01 17:05:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Oct 01 17:05:46 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 01 17:05:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.eve48", "format": "json"}]: dispatch
Oct 01 17:05:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_1c897d2a-1173-4d28-affc-8c999f770456", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_1c897d2a-1173-4d28-affc-8c999f770456", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:46 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 58 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 93 KiB/s wr, 9 op/s
Oct 01 17:05:46 compute-0 sudo[271929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:05:46 compute-0 sudo[271929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:46 compute-0 sudo[271929]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:46 compute-0 sudo[271954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 01 17:05:46 compute-0 sudo[271954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:46 compute-0 sudo[271954]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:05:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:05:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:05:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:05:46 compute-0 sudo[271999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:05:46 compute-0 sudo[271999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:46 compute-0 sudo[271999]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:46 compute-0 sudo[272024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:05:46 compute-0 sudo[272024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:46 compute-0 sudo[272024]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:46 compute-0 sudo[272049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:05:46 compute-0 sudo[272049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:46 compute-0 sudo[272049]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:46 compute-0 sudo[272074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 17:05:46 compute-0 sudo[272074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:47 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "994e3294-4a8a-410f-bff7-df7bb18891d7", "format": "json"}]: dispatch
Oct 01 17:05:47 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "auth_id": "eve48", "tenant_id": "ad666ccc4f754e07a5041ffb5aaf32ae", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:47 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "90543113-872c-4287-b768-45f56fc8f849", "auth_id": "david", "format": "json"}]: dispatch
Oct 01 17:05:47 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "90543113-872c-4287-b768-45f56fc8f849", "auth_id": "david", "format": "json"}]: dispatch
Oct 01 17:05:47 compute-0 ceph-mon[74273]: pgmap v1084: 305 pgs: 305 active+clean; 58 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 93 KiB/s wr, 9 op/s
Oct 01 17:05:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:05:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:05:47 compute-0 sudo[272074]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:47 compute-0 sudo[272129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:05:47 compute-0 sudo[272129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:47 compute-0 sudo[272129]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:47 compute-0 sudo[272154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:05:47 compute-0 sudo[272154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:47 compute-0 sudo[272154]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:47 compute-0 sudo[272179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:05:47 compute-0 sudo[272179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:47 compute-0 sudo[272179]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:47 compute-0 sudo[272204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- inventory --format=json-pretty --filter-for-batch
Oct 01 17:05:47 compute-0 sudo[272204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:48 compute-0 podman[272269]: 2025-10-01 17:05:48.075396445 +0000 UTC m=+0.089967006 container create 66ce0072beb6e87c412a5b18e679dd28ae1b6be68d2cac3442ca82fcc3ddd231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:05:48 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 58 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 94 KiB/s wr, 11 op/s
Oct 01 17:05:48 compute-0 podman[272269]: 2025-10-01 17:05:48.024771897 +0000 UTC m=+0.039342518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:05:48 compute-0 systemd[1]: Started libpod-conmon-66ce0072beb6e87c412a5b18e679dd28ae1b6be68d2cac3442ca82fcc3ddd231.scope.
Oct 01 17:05:48 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:05:48 compute-0 podman[272269]: 2025-10-01 17:05:48.210847603 +0000 UTC m=+0.225418144 container init 66ce0072beb6e87c412a5b18e679dd28ae1b6be68d2cac3442ca82fcc3ddd231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ritchie, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:05:48 compute-0 podman[272269]: 2025-10-01 17:05:48.217478619 +0000 UTC m=+0.232049170 container start 66ce0072beb6e87c412a5b18e679dd28ae1b6be68d2cac3442ca82fcc3ddd231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ritchie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 17:05:48 compute-0 podman[272269]: 2025-10-01 17:05:48.22157924 +0000 UTC m=+0.236149781 container attach 66ce0072beb6e87c412a5b18e679dd28ae1b6be68d2cac3442ca82fcc3ddd231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ritchie, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 17:05:48 compute-0 exciting_ritchie[272286]: 167 167
Oct 01 17:05:48 compute-0 systemd[1]: libpod-66ce0072beb6e87c412a5b18e679dd28ae1b6be68d2cac3442ca82fcc3ddd231.scope: Deactivated successfully.
Oct 01 17:05:48 compute-0 podman[272269]: 2025-10-01 17:05:48.225332909 +0000 UTC m=+0.239903470 container died 66ce0072beb6e87c412a5b18e679dd28ae1b6be68d2cac3442ca82fcc3ddd231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ritchie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 17:05:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-da7c42b00d128b0af46bc1790be25c3c5c1c3cac06e8870ed37664003f2fda5a-merged.mount: Deactivated successfully.
Oct 01 17:05:48 compute-0 podman[272269]: 2025-10-01 17:05:48.272756261 +0000 UTC m=+0.287326792 container remove 66ce0072beb6e87c412a5b18e679dd28ae1b6be68d2cac3442ca82fcc3ddd231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ritchie, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 17:05:48 compute-0 systemd[1]: libpod-conmon-66ce0072beb6e87c412a5b18e679dd28ae1b6be68d2cac3442ca82fcc3ddd231.scope: Deactivated successfully.
Oct 01 17:05:48 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:48 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:05:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Oct 01 17:05:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:05:48 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice bob with tenant 1841221f332340a299707d253063659f
Oct 01 17:05:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:05:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:48 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:48 compute-0 podman[272310]: 2025-10-01 17:05:48.479373234 +0000 UTC m=+0.064987778 container create e08af85146cd395a7b52df7cb83c99452addf6e0ada7d77cb8e7dbfb9c8fdf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tesla, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 17:05:48 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:05:48 compute-0 systemd[1]: Started libpod-conmon-e08af85146cd395a7b52df7cb83c99452addf6e0ada7d77cb8e7dbfb9c8fdf52.scope.
Oct 01 17:05:48 compute-0 podman[272310]: 2025-10-01 17:05:48.443560912 +0000 UTC m=+0.029175486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:05:48 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b48697a318e61eef7f7f529d66227d8e13091b0999b9f60bf7c3030eaf95534/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b48697a318e61eef7f7f529d66227d8e13091b0999b9f60bf7c3030eaf95534/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b48697a318e61eef7f7f529d66227d8e13091b0999b9f60bf7c3030eaf95534/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b48697a318e61eef7f7f529d66227d8e13091b0999b9f60bf7c3030eaf95534/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:05:48 compute-0 podman[272310]: 2025-10-01 17:05:48.572390615 +0000 UTC m=+0.158005189 container init e08af85146cd395a7b52df7cb83c99452addf6e0ada7d77cb8e7dbfb9c8fdf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tesla, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 17:05:48 compute-0 podman[272310]: 2025-10-01 17:05:48.58063571 +0000 UTC m=+0.166250284 container start e08af85146cd395a7b52df7cb83c99452addf6e0ada7d77cb8e7dbfb9c8fdf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:05:48 compute-0 podman[272310]: 2025-10-01 17:05:48.585786535 +0000 UTC m=+0.171401119 container attach e08af85146cd395a7b52df7cb83c99452addf6e0ada7d77cb8e7dbfb9c8fdf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tesla, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 01 17:05:48 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "e018909b-45d5-4fb2-882d-c4b0c2c5f1b4", "format": "json"}]: dispatch
Oct 01 17:05:48 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e018909b-45d5-4fb2-882d-c4b0c2c5f1b4, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:05:48 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e018909b-45d5-4fb2-882d-c4b0c2c5f1b4, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:05:49 compute-0 ceph-mon[74273]: pgmap v1085: 305 pgs: 305 active+clean; 58 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 94 KiB/s wr, 11 op/s
Oct 01 17:05:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:05:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:49 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:49 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "47b2be1b-8457-4141-a779-0c9e30960e86", "auth_id": "david", "format": "json"}]: dispatch
Oct 01 17:05:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:47b2be1b-8457-4141-a779-0c9e30960e86, vol_name:cephfs) < ""
Oct 01 17:05:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) v1
Oct 01 17:05:49 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Oct 01 17:05:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0) v1
Oct 01 17:05:49 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.david"}]: dispatch
Oct 01 17:05:49 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Oct 01 17:05:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:47b2be1b-8457-4141-a779-0c9e30960e86, vol_name:cephfs) < ""
Oct 01 17:05:49 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "47b2be1b-8457-4141-a779-0c9e30960e86", "auth_id": "david", "format": "json"}]: dispatch
Oct 01 17:05:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:47b2be1b-8457-4141-a779-0c9e30960e86, vol_name:cephfs) < ""
Oct 01 17:05:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/47b2be1b-8457-4141-a779-0c9e30960e86/103fa946-2c7f-48e4-9385-b7cf2236d2f2
Oct 01 17:05:49 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/47b2be1b-8457-4141-a779-0c9e30960e86/103fa946-2c7f-48e4-9385-b7cf2236d2f2],prefix=session evict} (starting...)
Oct 01 17:05:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:05:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:47b2be1b-8457-4141-a779-0c9e30960e86, vol_name:cephfs) < ""
Oct 01 17:05:49 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "auth_id": "eve48", "format": "json"}]: dispatch
Oct 01 17:05:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, vol_name:cephfs) < ""
Oct 01 17:05:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) v1
Oct 01 17:05:49 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.eve48", "format": "json"}]: dispatch
Oct 01 17:05:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0) v1
Oct 01 17:05:49 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.eve48"}]: dispatch
Oct 01 17:05:49 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Oct 01 17:05:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, vol_name:cephfs) < ""
Oct 01 17:05:49 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "auth_id": "eve48", "format": "json"}]: dispatch
Oct 01 17:05:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, vol_name:cephfs) < ""
Oct 01 17:05:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4
Oct 01 17:05:49 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=eve48,client_metadata.root=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4],prefix=session evict} (starting...)
Oct 01 17:05:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:05:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, vol_name:cephfs) < ""
Oct 01 17:05:50 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 59 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 133 KiB/s wr, 13 op/s
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]: [
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:     {
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:         "available": false,
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:         "ceph_device": false,
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:         "lsm_data": {},
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:         "lvs": [],
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:         "path": "/dev/sr0",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:         "rejected_reasons": [
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "Insufficient space (<5GB)",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "Has a FileSystem"
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:         ],
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:         "sys_api": {
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "actuators": null,
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "device_nodes": "sr0",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "devname": "sr0",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "human_readable_size": "482.00 KB",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "id_bus": "ata",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "model": "QEMU DVD-ROM",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "nr_requests": "2",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "parent": "/dev/sr0",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "partitions": {},
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "path": "/dev/sr0",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "removable": "1",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "rev": "2.5+",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "ro": "0",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "rotational": "0",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "sas_address": "",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "sas_device_handle": "",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "scheduler_mode": "mq-deadline",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "sectors": 0,
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "sectorsize": "2048",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "size": 493568.0,
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "support_discard": "2048",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "type": "disk",
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:             "vendor": "QEMU"
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:         }
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]:     }
Oct 01 17:05:50 compute-0 xenodochial_tesla[272327]: ]
Oct 01 17:05:50 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:50 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "e018909b-45d5-4fb2-882d-c4b0c2c5f1b4", "format": "json"}]: dispatch
Oct 01 17:05:50 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "47b2be1b-8457-4141-a779-0c9e30960e86", "auth_id": "david", "format": "json"}]: dispatch
Oct 01 17:05:50 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Oct 01 17:05:50 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.david"}]: dispatch
Oct 01 17:05:50 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Oct 01 17:05:50 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "47b2be1b-8457-4141-a779-0c9e30960e86", "auth_id": "david", "format": "json"}]: dispatch
Oct 01 17:05:50 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.eve48", "format": "json"}]: dispatch
Oct 01 17:05:50 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.eve48"}]: dispatch
Oct 01 17:05:50 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Oct 01 17:05:50 compute-0 systemd[1]: libpod-e08af85146cd395a7b52df7cb83c99452addf6e0ada7d77cb8e7dbfb9c8fdf52.scope: Deactivated successfully.
Oct 01 17:05:50 compute-0 podman[272310]: 2025-10-01 17:05:50.17543171 +0000 UTC m=+1.761046284 container died e08af85146cd395a7b52df7cb83c99452addf6e0ada7d77cb8e7dbfb9c8fdf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tesla, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:05:50 compute-0 systemd[1]: libpod-e08af85146cd395a7b52df7cb83c99452addf6e0ada7d77cb8e7dbfb9c8fdf52.scope: Consumed 1.649s CPU time.
Oct 01 17:05:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b48697a318e61eef7f7f529d66227d8e13091b0999b9f60bf7c3030eaf95534-merged.mount: Deactivated successfully.
Oct 01 17:05:50 compute-0 podman[272310]: 2025-10-01 17:05:50.243714318 +0000 UTC m=+1.829328852 container remove e08af85146cd395a7b52df7cb83c99452addf6e0ada7d77cb8e7dbfb9c8fdf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tesla, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:05:50 compute-0 systemd[1]: libpod-conmon-e08af85146cd395a7b52df7cb83c99452addf6e0ada7d77cb8e7dbfb9c8fdf52.scope: Deactivated successfully.
Oct 01 17:05:50 compute-0 sudo[272204]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:05:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:05:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:05:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:05:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:05:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:05:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 17:05:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:05:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 17:05:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:05:50 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev c0094d81-1511-4169-827b-d84befa0dcaa does not exist
Oct 01 17:05:50 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 1ae75df8-b5fe-4be6-8c91-bfe4bd46bb8b does not exist
Oct 01 17:05:50 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 3c099017-8ea0-447e-b1cf-6432972fc26a does not exist
Oct 01 17:05:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 17:05:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:05:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 17:05:50 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:05:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:05:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:05:50 compute-0 sudo[274352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:05:50 compute-0 sudo[274352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:50 compute-0 sudo[274352]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:50 compute-0 sudo[274377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:05:50 compute-0 sudo[274377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:50 compute-0 sudo[274377]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:50 compute-0 sudo[274402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:05:50 compute-0 sudo[274402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:50 compute-0 sudo[274402]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:50 compute-0 sudo[274427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 17:05:50 compute-0 sudo[274427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:05:51 compute-0 podman[274494]: 2025-10-01 17:05:51.034476294 +0000 UTC m=+0.053758040 container create 6e8553b6b1da2bb9e369ab1e51603070f095c0119608d1b0400d31a20cd2c9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 17:05:51 compute-0 systemd[1]: Started libpod-conmon-6e8553b6b1da2bb9e369ab1e51603070f095c0119608d1b0400d31a20cd2c9d8.scope.
Oct 01 17:05:51 compute-0 podman[274494]: 2025-10-01 17:05:51.005689423 +0000 UTC m=+0.024971239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:05:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:05:51 compute-0 podman[274494]: 2025-10-01 17:05:51.131542774 +0000 UTC m=+0.150824490 container init 6e8553b6b1da2bb9e369ab1e51603070f095c0119608d1b0400d31a20cd2c9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_burnell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:05:51 compute-0 podman[274494]: 2025-10-01 17:05:51.143158084 +0000 UTC m=+0.162439820 container start 6e8553b6b1da2bb9e369ab1e51603070f095c0119608d1b0400d31a20cd2c9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_burnell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:05:51 compute-0 podman[274494]: 2025-10-01 17:05:51.146993439 +0000 UTC m=+0.166275205 container attach 6e8553b6b1da2bb9e369ab1e51603070f095c0119608d1b0400d31a20cd2c9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 17:05:51 compute-0 dazzling_burnell[274510]: 167 167
Oct 01 17:05:51 compute-0 systemd[1]: libpod-6e8553b6b1da2bb9e369ab1e51603070f095c0119608d1b0400d31a20cd2c9d8.scope: Deactivated successfully.
Oct 01 17:05:51 compute-0 podman[274494]: 2025-10-01 17:05:51.151547578 +0000 UTC m=+0.170829334 container died 6e8553b6b1da2bb9e369ab1e51603070f095c0119608d1b0400d31a20cd2c9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 17:05:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-48ac83c5bc40cdb3bc7a10de07f558a91b0483eba4dec1d4c7659697296db6fd-merged.mount: Deactivated successfully.
Oct 01 17:05:51 compute-0 podman[274494]: 2025-10-01 17:05:51.192163853 +0000 UTC m=+0.211445579 container remove 6e8553b6b1da2bb9e369ab1e51603070f095c0119608d1b0400d31a20cd2c9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_burnell, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:05:51 compute-0 systemd[1]: libpod-conmon-6e8553b6b1da2bb9e369ab1e51603070f095c0119608d1b0400d31a20cd2c9d8.scope: Deactivated successfully.
Oct 01 17:05:51 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "auth_id": "eve48", "format": "json"}]: dispatch
Oct 01 17:05:51 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "auth_id": "eve48", "format": "json"}]: dispatch
Oct 01 17:05:51 compute-0 ceph-mon[74273]: pgmap v1086: 305 pgs: 305 active+clean; 59 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 133 KiB/s wr, 13 op/s
Oct 01 17:05:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:05:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:05:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:05:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:05:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:05:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:05:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:05:51 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:05:51 compute-0 podman[274534]: 2025-10-01 17:05:51.346809025 +0000 UTC m=+0.038679447 container create 56b310f88af9b64874b6ba7a8c2118ce8c302ca57c3a27e424ae0d74b68905cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_tharp, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 01 17:05:51 compute-0 systemd[1]: Started libpod-conmon-56b310f88af9b64874b6ba7a8c2118ce8c302ca57c3a27e424ae0d74b68905cf.scope.
Oct 01 17:05:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a0f1a4995d30aaff2ab18df450e46082de37c8be126de0a87f3378e96ce7a88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a0f1a4995d30aaff2ab18df450e46082de37c8be126de0a87f3378e96ce7a88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a0f1a4995d30aaff2ab18df450e46082de37c8be126de0a87f3378e96ce7a88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a0f1a4995d30aaff2ab18df450e46082de37c8be126de0a87f3378e96ce7a88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a0f1a4995d30aaff2ab18df450e46082de37c8be126de0a87f3378e96ce7a88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 17:05:51 compute-0 podman[274534]: 2025-10-01 17:05:51.329379539 +0000 UTC m=+0.021249971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:05:51 compute-0 podman[274534]: 2025-10-01 17:05:51.42565272 +0000 UTC m=+0.117523162 container init 56b310f88af9b64874b6ba7a8c2118ce8c302ca57c3a27e424ae0d74b68905cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:05:51 compute-0 podman[274534]: 2025-10-01 17:05:51.432753874 +0000 UTC m=+0.124624296 container start 56b310f88af9b64874b6ba7a8c2118ce8c302ca57c3a27e424ae0d74b68905cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 17:05:51 compute-0 podman[274534]: 2025-10-01 17:05:51.436540016 +0000 UTC m=+0.128410458 container attach 56b310f88af9b64874b6ba7a8c2118ce8c302ca57c3a27e424ae0d74b68905cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_tharp, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:05:51 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:05:51 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Oct 01 17:05:51 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:05:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Oct 01 17:05:51 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Oct 01 17:05:51 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 01 17:05:51 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:51 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:05:51 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:51 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:05:51 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:05:51 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:05:51 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:52 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 59 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 86 KiB/s wr, 9 op/s
Oct 01 17:05:52 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:05:52 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Oct 01 17:05:52 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 01 17:05:52 compute-0 hardcore_tharp[274550]: --> passed data devices: 0 physical, 3 LVM
Oct 01 17:05:52 compute-0 hardcore_tharp[274550]: --> relative data size: 1.0
Oct 01 17:05:52 compute-0 hardcore_tharp[274550]: --> All data devices are unavailable
Oct 01 17:05:52 compute-0 systemd[1]: libpod-56b310f88af9b64874b6ba7a8c2118ce8c302ca57c3a27e424ae0d74b68905cf.scope: Deactivated successfully.
Oct 01 17:05:52 compute-0 podman[274534]: 2025-10-01 17:05:52.463505894 +0000 UTC m=+1.155376356 container died 56b310f88af9b64874b6ba7a8c2118ce8c302ca57c3a27e424ae0d74b68905cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_tharp, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 01 17:05:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a0f1a4995d30aaff2ab18df450e46082de37c8be126de0a87f3378e96ce7a88-merged.mount: Deactivated successfully.
Oct 01 17:05:52 compute-0 podman[274534]: 2025-10-01 17:05:52.558546669 +0000 UTC m=+1.250417131 container remove 56b310f88af9b64874b6ba7a8c2118ce8c302ca57c3a27e424ae0d74b68905cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_tharp, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:05:52 compute-0 systemd[1]: libpod-conmon-56b310f88af9b64874b6ba7a8c2118ce8c302ca57c3a27e424ae0d74b68905cf.scope: Deactivated successfully.
Oct 01 17:05:52 compute-0 sudo[274427]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:52 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "auth_id": "eve47", "tenant_id": "ad666ccc4f754e07a5041ffb5aaf32ae", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, tenant_id:ad666ccc4f754e07a5041ffb5aaf32ae, vol_name:cephfs) < ""
Oct 01 17:05:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) v1
Oct 01 17:05:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.eve47", "format": "json"}]: dispatch
Oct 01 17:05:52 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID eve47 with tenant ad666ccc4f754e07a5041ffb5aaf32ae
Oct 01 17:05:52 compute-0 sudo[274594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:05:52 compute-0 sudo[274594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:52 compute-0 sudo[274594]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_1c897d2a-1173-4d28-affc-8c999f770456", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:05:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_1c897d2a-1173-4d28-affc-8c999f770456", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:52 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_1c897d2a-1173-4d28-affc-8c999f770456", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, tenant_id:ad666ccc4f754e07a5041ffb5aaf32ae, vol_name:cephfs) < ""
Oct 01 17:05:52 compute-0 sudo[274619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:05:52 compute-0 sudo[274619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:52 compute-0 sudo[274619]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:52 compute-0 sudo[274644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:05:52 compute-0 sudo[274644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:52 compute-0 sudo[274644]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:52 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "e018909b-45d5-4fb2-882d-c4b0c2c5f1b4_64ac2cdb-3026-47ea-a540-30145adcb229", "force": true, "format": "json"}]: dispatch
Oct 01 17:05:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e018909b-45d5-4fb2-882d-c4b0c2c5f1b4_64ac2cdb-3026-47ea-a540-30145adcb229, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:05:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp'
Oct 01 17:05:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp' to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta'
Oct 01 17:05:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e018909b-45d5-4fb2-882d-c4b0c2c5f1b4_64ac2cdb-3026-47ea-a540-30145adcb229, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:05:52 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "e018909b-45d5-4fb2-882d-c4b0c2c5f1b4", "force": true, "format": "json"}]: dispatch
Oct 01 17:05:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e018909b-45d5-4fb2-882d-c4b0c2c5f1b4, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:05:52 compute-0 sudo[274669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 17:05:52 compute-0 sudo[274669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp'
Oct 01 17:05:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp' to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta'
Oct 01 17:05:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e018909b-45d5-4fb2-882d-c4b0c2c5f1b4, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:05:53 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "90543113-872c-4287-b768-45f56fc8f849", "format": "json"}]: dispatch
Oct 01 17:05:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:90543113-872c-4287-b768-45f56fc8f849, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:05:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:90543113-872c-4287-b768-45f56fc8f849, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:05:53 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '90543113-872c-4287-b768-45f56fc8f849' of type subvolume
Oct 01 17:05:53 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:05:53.248+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '90543113-872c-4287-b768-45f56fc8f849' of type subvolume
Oct 01 17:05:53 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "90543113-872c-4287-b768-45f56fc8f849", "force": true, "format": "json"}]: dispatch
Oct 01 17:05:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:90543113-872c-4287-b768-45f56fc8f849, vol_name:cephfs) < ""
Oct 01 17:05:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/90543113-872c-4287-b768-45f56fc8f849'' moved to trashcan
Oct 01 17:05:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:05:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:90543113-872c-4287-b768-45f56fc8f849, vol_name:cephfs) < ""
Oct 01 17:05:53 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:05:53 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:05:53 compute-0 ceph-mon[74273]: pgmap v1087: 305 pgs: 305 active+clean; 59 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 86 KiB/s wr, 9 op/s
Oct 01 17:05:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.eve47", "format": "json"}]: dispatch
Oct 01 17:05:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_1c897d2a-1173-4d28-affc-8c999f770456", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:53 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_1c897d2a-1173-4d28-affc-8c999f770456", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:53 compute-0 podman[274733]: 2025-10-01 17:05:53.314475472 +0000 UTC m=+0.053624452 container create 69fb98569ce8b419fb7b03d32769a792b89d7475571bdfd855106ea256d38454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dubinsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 17:05:53 compute-0 systemd[1]: Started libpod-conmon-69fb98569ce8b419fb7b03d32769a792b89d7475571bdfd855106ea256d38454.scope.
Oct 01 17:05:53 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:05:53 compute-0 podman[274733]: 2025-10-01 17:05:53.37604452 +0000 UTC m=+0.115193540 container init 69fb98569ce8b419fb7b03d32769a792b89d7475571bdfd855106ea256d38454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dubinsky, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:05:53 compute-0 podman[274733]: 2025-10-01 17:05:53.3822538 +0000 UTC m=+0.121402780 container start 69fb98569ce8b419fb7b03d32769a792b89d7475571bdfd855106ea256d38454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:05:53 compute-0 podman[274733]: 2025-10-01 17:05:53.387131748 +0000 UTC m=+0.126280848 container attach 69fb98569ce8b419fb7b03d32769a792b89d7475571bdfd855106ea256d38454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 17:05:53 compute-0 intelligent_dubinsky[274750]: 167 167
Oct 01 17:05:53 compute-0 podman[274733]: 2025-10-01 17:05:53.294374013 +0000 UTC m=+0.033523033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:05:53 compute-0 systemd[1]: libpod-69fb98569ce8b419fb7b03d32769a792b89d7475571bdfd855106ea256d38454.scope: Deactivated successfully.
Oct 01 17:05:53 compute-0 podman[274733]: 2025-10-01 17:05:53.389815012 +0000 UTC m=+0.128964032 container died 69fb98569ce8b419fb7b03d32769a792b89d7475571bdfd855106ea256d38454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 17:05:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-37b267219382012f86d9a1bb8c66865cc6e6fc7070b58436e8012586e48c0e56-merged.mount: Deactivated successfully.
Oct 01 17:05:53 compute-0 podman[274733]: 2025-10-01 17:05:53.488708662 +0000 UTC m=+0.227857642 container remove 69fb98569ce8b419fb7b03d32769a792b89d7475571bdfd855106ea256d38454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dubinsky, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 01 17:05:53 compute-0 systemd[1]: libpod-conmon-69fb98569ce8b419fb7b03d32769a792b89d7475571bdfd855106ea256d38454.scope: Deactivated successfully.
Oct 01 17:05:53 compute-0 podman[274775]: 2025-10-01 17:05:53.692776889 +0000 UTC m=+0.077115680 container create 0cd854803bf400bfae938f3064af6cef553ef6f9c9f9b6bd8758fdd9ef0a49c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heisenberg, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 17:05:53 compute-0 podman[274775]: 2025-10-01 17:05:53.641930338 +0000 UTC m=+0.026269129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:05:53 compute-0 systemd[1]: Started libpod-conmon-0cd854803bf400bfae938f3064af6cef553ef6f9c9f9b6bd8758fdd9ef0a49c8.scope.
Oct 01 17:05:53 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:05:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7813442c8f6bff0418e5a7022ad044d9c2c8d11fa14bb5001a1c4d13a9048f2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:05:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7813442c8f6bff0418e5a7022ad044d9c2c8d11fa14bb5001a1c4d13a9048f2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:05:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7813442c8f6bff0418e5a7022ad044d9c2c8d11fa14bb5001a1c4d13a9048f2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:05:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7813442c8f6bff0418e5a7022ad044d9c2c8d11fa14bb5001a1c4d13a9048f2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:05:53 compute-0 podman[274775]: 2025-10-01 17:05:53.823190169 +0000 UTC m=+0.207528980 container init 0cd854803bf400bfae938f3064af6cef553ef6f9c9f9b6bd8758fdd9ef0a49c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 17:05:53 compute-0 podman[274775]: 2025-10-01 17:05:53.836865366 +0000 UTC m=+0.221204167 container start 0cd854803bf400bfae938f3064af6cef553ef6f9c9f9b6bd8758fdd9ef0a49c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heisenberg, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:05:53 compute-0 podman[274775]: 2025-10-01 17:05:53.841858441 +0000 UTC m=+0.226197242 container attach 0cd854803bf400bfae938f3064af6cef553ef6f9c9f9b6bd8758fdd9ef0a49c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heisenberg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:05:54 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 60 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 160 KiB/s wr, 17 op/s
Oct 01 17:05:54 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "auth_id": "eve47", "tenant_id": "ad666ccc4f754e07a5041ffb5aaf32ae", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:05:54 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "e018909b-45d5-4fb2-882d-c4b0c2c5f1b4_64ac2cdb-3026-47ea-a540-30145adcb229", "force": true, "format": "json"}]: dispatch
Oct 01 17:05:54 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "e018909b-45d5-4fb2-882d-c4b0c2c5f1b4", "force": true, "format": "json"}]: dispatch
Oct 01 17:05:54 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "90543113-872c-4287-b768-45f56fc8f849", "format": "json"}]: dispatch
Oct 01 17:05:54 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "90543113-872c-4287-b768-45f56fc8f849", "force": true, "format": "json"}]: dispatch
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]: {
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:     "0": [
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:         {
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "devices": [
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "/dev/loop3"
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             ],
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "lv_name": "ceph_lv0",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "lv_size": "21470642176",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "name": "ceph_lv0",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "tags": {
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.cluster_name": "ceph",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.crush_device_class": "",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.encrypted": "0",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.osd_id": "0",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.type": "block",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.vdo": "0"
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             },
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "type": "block",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "vg_name": "ceph_vg0"
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:         }
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:     ],
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:     "1": [
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:         {
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "devices": [
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "/dev/loop4"
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             ],
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "lv_name": "ceph_lv1",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "lv_size": "21470642176",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "name": "ceph_lv1",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "tags": {
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.cluster_name": "ceph",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.crush_device_class": "",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.encrypted": "0",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.osd_id": "1",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.type": "block",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.vdo": "0"
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             },
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "type": "block",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "vg_name": "ceph_vg1"
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:         }
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:     ],
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:     "2": [
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:         {
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "devices": [
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "/dev/loop5"
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             ],
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "lv_name": "ceph_lv2",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "lv_size": "21470642176",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "name": "ceph_lv2",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "tags": {
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.cluster_name": "ceph",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.crush_device_class": "",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.encrypted": "0",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.osd_id": "2",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.type": "block",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:                 "ceph.vdo": "0"
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             },
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "type": "block",
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:             "vg_name": "ceph_vg2"
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:         }
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]:     ]
Oct 01 17:05:54 compute-0 flamboyant_heisenberg[274792]: }
Oct 01 17:05:54 compute-0 systemd[1]: libpod-0cd854803bf400bfae938f3064af6cef553ef6f9c9f9b6bd8758fdd9ef0a49c8.scope: Deactivated successfully.
Oct 01 17:05:54 compute-0 podman[274775]: 2025-10-01 17:05:54.637170494 +0000 UTC m=+1.021509315 container died 0cd854803bf400bfae938f3064af6cef553ef6f9c9f9b6bd8758fdd9ef0a49c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:05:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-7813442c8f6bff0418e5a7022ad044d9c2c8d11fa14bb5001a1c4d13a9048f2b-merged.mount: Deactivated successfully.
Oct 01 17:05:55 compute-0 podman[274775]: 2025-10-01 17:05:55.000668136 +0000 UTC m=+1.385006927 container remove 0cd854803bf400bfae938f3064af6cef553ef6f9c9f9b6bd8758fdd9ef0a49c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heisenberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 01 17:05:55 compute-0 systemd[1]: libpod-conmon-0cd854803bf400bfae938f3064af6cef553ef6f9c9f9b6bd8758fdd9ef0a49c8.scope: Deactivated successfully.
Oct 01 17:05:55 compute-0 sudo[274669]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:55 compute-0 sudo[274812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:05:55 compute-0 sudo[274812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:55 compute-0 sudo[274812]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:55 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:05:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:05:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Oct 01 17:05:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:05:55 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID alice bob with tenant 1841221f332340a299707d253063659f
Oct 01 17:05:55 compute-0 sudo[274837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:05:55 compute-0 sudo[274837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:55 compute-0 sudo[274837]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:05:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:55 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:05:55 compute-0 sudo[274862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:05:55 compute-0 sudo[274862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:55 compute-0 sudo[274862]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:55 compute-0 sudo[274887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 17:05:55 compute-0 sudo[274887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:55 compute-0 ceph-mon[74273]: pgmap v1088: 305 pgs: 305 active+clean; 60 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 160 KiB/s wr, 17 op/s
Oct 01 17:05:55 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "r", "format": "json"}]: dispatch
Oct 01 17:05:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:05:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:05:55 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:05:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Oct 01 17:05:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Oct 01 17:05:55 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Oct 01 17:05:55 compute-0 podman[274950]: 2025-10-01 17:05:55.725927693 +0000 UTC m=+0.043046615 container create 4419f3c031735294b8d357688916a042a93cea4d2d3b2b44909e7f0cba7312d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_driscoll, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:05:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:05:55 compute-0 systemd[1]: Started libpod-conmon-4419f3c031735294b8d357688916a042a93cea4d2d3b2b44909e7f0cba7312d6.scope.
Oct 01 17:05:55 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:05:55 compute-0 podman[274950]: 2025-10-01 17:05:55.793923403 +0000 UTC m=+0.111042295 container init 4419f3c031735294b8d357688916a042a93cea4d2d3b2b44909e7f0cba7312d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_driscoll, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 17:05:55 compute-0 podman[274950]: 2025-10-01 17:05:55.704997632 +0000 UTC m=+0.022116534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:05:55 compute-0 podman[274950]: 2025-10-01 17:05:55.803642738 +0000 UTC m=+0.120761620 container start 4419f3c031735294b8d357688916a042a93cea4d2d3b2b44909e7f0cba7312d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_driscoll, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 01 17:05:55 compute-0 podman[274950]: 2025-10-01 17:05:55.806972002 +0000 UTC m=+0.124090874 container attach 4419f3c031735294b8d357688916a042a93cea4d2d3b2b44909e7f0cba7312d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_driscoll, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 17:05:55 compute-0 nifty_driscoll[274967]: 167 167
Oct 01 17:05:55 compute-0 systemd[1]: libpod-4419f3c031735294b8d357688916a042a93cea4d2d3b2b44909e7f0cba7312d6.scope: Deactivated successfully.
Oct 01 17:05:55 compute-0 conmon[274967]: conmon 4419f3c031735294b8d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4419f3c031735294b8d357688916a042a93cea4d2d3b2b44909e7f0cba7312d6.scope/container/memory.events
Oct 01 17:05:55 compute-0 podman[274950]: 2025-10-01 17:05:55.810416072 +0000 UTC m=+0.127534964 container died 4419f3c031735294b8d357688916a042a93cea4d2d3b2b44909e7f0cba7312d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 17:05:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d8188d170585100ce2cefb62d235223de670e047324e7ba401422ce6da1ec2d-merged.mount: Deactivated successfully.
Oct 01 17:05:55 compute-0 podman[274950]: 2025-10-01 17:05:55.858273021 +0000 UTC m=+0.175391913 container remove 4419f3c031735294b8d357688916a042a93cea4d2d3b2b44909e7f0cba7312d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 17:05:55 compute-0 systemd[1]: libpod-conmon-4419f3c031735294b8d357688916a042a93cea4d2d3b2b44909e7f0cba7312d6.scope: Deactivated successfully.
Oct 01 17:05:56 compute-0 podman[274990]: 2025-10-01 17:05:56.030871952 +0000 UTC m=+0.037077710 container create 71a5ef5f89732bfdf782d3e1235a7b3fc3ab57e611b1e20bec4d5cc8387c0a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 17:05:56 compute-0 systemd[1]: Started libpod-conmon-71a5ef5f89732bfdf782d3e1235a7b3fc3ab57e611b1e20bec4d5cc8387c0a93.scope.
Oct 01 17:05:56 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 60 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 150 KiB/s wr, 16 op/s
Oct 01 17:05:56 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:05:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbb26245fb03598cbe9c532c2519c896133ab8eee54b095ff87006b5c8450c06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:05:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbb26245fb03598cbe9c532c2519c896133ab8eee54b095ff87006b5c8450c06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:05:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbb26245fb03598cbe9c532c2519c896133ab8eee54b095ff87006b5c8450c06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:05:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbb26245fb03598cbe9c532c2519c896133ab8eee54b095ff87006b5c8450c06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:05:56 compute-0 podman[274990]: 2025-10-01 17:05:56.10929737 +0000 UTC m=+0.115503158 container init 71a5ef5f89732bfdf782d3e1235a7b3fc3ab57e611b1e20bec4d5cc8387c0a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:05:56 compute-0 podman[274990]: 2025-10-01 17:05:56.014922416 +0000 UTC m=+0.021128204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:05:56 compute-0 podman[274990]: 2025-10-01 17:05:56.114951066 +0000 UTC m=+0.121156834 container start 71a5ef5f89732bfdf782d3e1235a7b3fc3ab57e611b1e20bec4d5cc8387c0a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 01 17:05:56 compute-0 podman[274990]: 2025-10-01 17:05:56.11828804 +0000 UTC m=+0.124493828 container attach 71a5ef5f89732bfdf782d3e1235a7b3fc3ab57e611b1e20bec4d5cc8387c0a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 01 17:05:56 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "02b52745-7744-41a8-9356-5182381bc1a5", "format": "json"}]: dispatch
Oct 01 17:05:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:02b52745-7744-41a8-9356-5182381bc1a5, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:05:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:02b52745-7744-41a8-9356-5182381bc1a5, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:05:56 compute-0 ceph-mon[74273]: osdmap e147: 3 total, 3 up, 3 in
Oct 01 17:05:56 compute-0 ceph-mon[74273]: pgmap v1090: 305 pgs: 305 active+clean; 60 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 150 KiB/s wr, 16 op/s
Oct 01 17:05:56 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "02b52745-7744-41a8-9356-5182381bc1a5", "format": "json"}]: dispatch
Oct 01 17:05:56 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f4359a50-b671-43df-8a71-afb168cd35b0", "format": "json"}]: dispatch
Oct 01 17:05:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f4359a50-b671-43df-8a71-afb168cd35b0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:05:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f4359a50-b671-43df-8a71-afb168cd35b0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:05:56 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:05:56.648+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f4359a50-b671-43df-8a71-afb168cd35b0' of type subvolume
Oct 01 17:05:56 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f4359a50-b671-43df-8a71-afb168cd35b0' of type subvolume
Oct 01 17:05:56 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f4359a50-b671-43df-8a71-afb168cd35b0", "force": true, "format": "json"}]: dispatch
Oct 01 17:05:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f4359a50-b671-43df-8a71-afb168cd35b0, vol_name:cephfs) < ""
Oct 01 17:05:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f4359a50-b671-43df-8a71-afb168cd35b0'' moved to trashcan
Oct 01 17:05:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:05:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f4359a50-b671-43df-8a71-afb168cd35b0, vol_name:cephfs) < ""
Oct 01 17:05:56 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "auth_id": "eve47", "format": "json"}]: dispatch
Oct 01 17:05:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, vol_name:cephfs) < ""
Oct 01 17:05:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) v1
Oct 01 17:05:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.eve47", "format": "json"}]: dispatch
Oct 01 17:05:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0) v1
Oct 01 17:05:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.eve47"}]: dispatch
Oct 01 17:05:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]: {
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:         "osd_id": 2,
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:         "type": "bluestore"
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:     },
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:         "osd_id": 0,
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:         "type": "bluestore"
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:     },
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:         "osd_id": 1,
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:         "type": "bluestore"
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]:     }
Oct 01 17:05:57 compute-0 stoic_driscoll[275007]: }
Oct 01 17:05:57 compute-0 systemd[1]: libpod-71a5ef5f89732bfdf782d3e1235a7b3fc3ab57e611b1e20bec4d5cc8387c0a93.scope: Deactivated successfully.
Oct 01 17:05:57 compute-0 podman[274990]: 2025-10-01 17:05:57.103988962 +0000 UTC m=+1.110194730 container died 71a5ef5f89732bfdf782d3e1235a7b3fc3ab57e611b1e20bec4d5cc8387c0a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:05:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, vol_name:cephfs) < ""
Oct 01 17:05:57 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "auth_id": "eve47", "format": "json"}]: dispatch
Oct 01 17:05:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, vol_name:cephfs) < ""
Oct 01 17:05:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4
Oct 01 17:05:57 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=eve47,client_metadata.root=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4],prefix=session evict} (starting...)
Oct 01 17:05:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:05:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, vol_name:cephfs) < ""
Oct 01 17:05:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbb26245fb03598cbe9c532c2519c896133ab8eee54b095ff87006b5c8450c06-merged.mount: Deactivated successfully.
Oct 01 17:05:57 compute-0 podman[274990]: 2025-10-01 17:05:57.163838774 +0000 UTC m=+1.170044542 container remove 71a5ef5f89732bfdf782d3e1235a7b3fc3ab57e611b1e20bec4d5cc8387c0a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:05:57 compute-0 systemd[1]: libpod-conmon-71a5ef5f89732bfdf782d3e1235a7b3fc3ab57e611b1e20bec4d5cc8387c0a93.scope: Deactivated successfully.
Oct 01 17:05:57 compute-0 sudo[274887]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:05:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:05:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:05:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:05:57 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev f8d5b65e-f47b-449e-aaaf-82c8f7785559 does not exist
Oct 01 17:05:57 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev c27f603a-7613-4666-a68b-e0f3d3a0b3ef does not exist
Oct 01 17:05:57 compute-0 sudo[275053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:05:57 compute-0 sudo[275053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:57 compute-0 sudo[275053]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:57 compute-0 sudo[275078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 17:05:57 compute-0 sudo[275078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:05:57 compute-0 sudo[275078]: pam_unix(sudo:session): session closed for user root
Oct 01 17:05:57 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f4359a50-b671-43df-8a71-afb168cd35b0", "format": "json"}]: dispatch
Oct 01 17:05:57 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f4359a50-b671-43df-8a71-afb168cd35b0", "force": true, "format": "json"}]: dispatch
Oct 01 17:05:57 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "auth_id": "eve47", "format": "json"}]: dispatch
Oct 01 17:05:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.eve47", "format": "json"}]: dispatch
Oct 01 17:05:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.eve47"}]: dispatch
Oct 01 17:05:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Oct 01 17:05:57 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "auth_id": "eve47", "format": "json"}]: dispatch
Oct 01 17:05:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:05:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:05:58 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 60 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 150 KiB/s wr, 16 op/s
Oct 01 17:05:58 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:05:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:58 compute-0 podman[275103]: 2025-10-01 17:05:58.745577928 +0000 UTC m=+0.063457014 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 01 17:05:58 compute-0 ceph-mon[74273]: pgmap v1091: 305 pgs: 305 active+clean; 60 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 150 KiB/s wr, 16 op/s
Oct 01 17:05:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Oct 01 17:05:59 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:05:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Oct 01 17:05:59 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Oct 01 17:05:59 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 01 17:05:59 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:59 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:05:59 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:05:59 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:05:59 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:05:59 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:05:59 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:06:00 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5c1c29e1-3838-4336-89e4-29126b32e8a3", "format": "json"}]: dispatch
Oct 01 17:06:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5c1c29e1-3838-4336-89e4-29126b32e8a3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:06:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5c1c29e1-3838-4336-89e4-29126b32e8a3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:06:00 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:06:00.042+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5c1c29e1-3838-4336-89e4-29126b32e8a3' of type subvolume
Oct 01 17:06:00 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5c1c29e1-3838-4336-89e4-29126b32e8a3' of type subvolume
Oct 01 17:06:00 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5c1c29e1-3838-4336-89e4-29126b32e8a3", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5c1c29e1-3838-4336-89e4-29126b32e8a3, vol_name:cephfs) < ""
Oct 01 17:06:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5c1c29e1-3838-4336-89e4-29126b32e8a3'' moved to trashcan
Oct 01 17:06:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:06:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5c1c29e1-3838-4336-89e4-29126b32e8a3, vol_name:cephfs) < ""
Oct 01 17:06:00 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:06:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Oct 01 17:06:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Oct 01 17:06:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 01 17:06:00 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 60 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 150 KiB/s wr, 16 op/s
Oct 01 17:06:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:06:00 compute-0 podman[275125]: 2025-10-01 17:06:00.775450049 +0000 UTC m=+0.085966261 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 01 17:06:00 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "02b52745-7744-41a8-9356-5182381bc1a5_ef8e3ff4-43a7-4d58-be3c-7a40a4de32c8", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:02b52745-7744-41a8-9356-5182381bc1a5_ef8e3ff4-43a7-4d58-be3c-7a40a4de32c8, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp'
Oct 01 17:06:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp' to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta'
Oct 01 17:06:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:02b52745-7744-41a8-9356-5182381bc1a5_ef8e3ff4-43a7-4d58-be3c-7a40a4de32c8, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:00 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "02b52745-7744-41a8-9356-5182381bc1a5", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:02b52745-7744-41a8-9356-5182381bc1a5, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp'
Oct 01 17:06:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp' to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta'
Oct 01 17:06:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:02b52745-7744-41a8-9356-5182381bc1a5, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:01 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 01 17:06:01 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5c1c29e1-3838-4336-89e4-29126b32e8a3", "format": "json"}]: dispatch
Oct 01 17:06:01 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5c1c29e1-3838-4336-89e4-29126b32e8a3", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:01 compute-0 ceph-mon[74273]: pgmap v1092: 305 pgs: 305 active+clean; 60 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 150 KiB/s wr, 16 op/s
Oct 01 17:06:01 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "auth_id": "eve49", "format": "json"}]: dispatch
Oct 01 17:06:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, vol_name:cephfs) < ""
Oct 01 17:06:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) v1
Oct 01 17:06:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.eve49", "format": "json"}]: dispatch
Oct 01 17:06:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0) v1
Oct 01 17:06:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.eve49"}]: dispatch
Oct 01 17:06:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Oct 01 17:06:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, vol_name:cephfs) < ""
Oct 01 17:06:01 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "auth_id": "eve49", "format": "json"}]: dispatch
Oct 01 17:06:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, vol_name:cephfs) < ""
Oct 01 17:06:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4
Oct 01 17:06:01 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=eve49,client_metadata.root=/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456/6a685f3e-18b7-4295-a517-5103fbf2a8f4],prefix=session evict} (starting...)
Oct 01 17:06:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:06:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, vol_name:cephfs) < ""
Oct 01 17:06:01 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1c897d2a-1173-4d28-affc-8c999f770456", "format": "json"}]: dispatch
Oct 01 17:06:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1c897d2a-1173-4d28-affc-8c999f770456, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:06:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1c897d2a-1173-4d28-affc-8c999f770456, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:06:01 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:06:01.456+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1c897d2a-1173-4d28-affc-8c999f770456' of type subvolume
Oct 01 17:06:01 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1c897d2a-1173-4d28-affc-8c999f770456' of type subvolume
Oct 01 17:06:01 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, vol_name:cephfs) < ""
Oct 01 17:06:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1c897d2a-1173-4d28-affc-8c999f770456'' moved to trashcan
Oct 01 17:06:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:06:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1c897d2a-1173-4d28-affc-8c999f770456, vol_name:cephfs) < ""
Oct 01 17:06:01 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:06:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:06:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Oct 01 17:06:01 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Oct 01 17:06:01 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID bob with tenant 1841221f332340a299707d253063659f
Oct 01 17:06:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:06:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:06:02 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:06:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:06:02 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 60 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 150 KiB/s wr, 16 op/s
Oct 01 17:06:02 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "02b52745-7744-41a8-9356-5182381bc1a5_ef8e3ff4-43a7-4d58-be3c-7a40a4de32c8", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:02 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "02b52745-7744-41a8-9356-5182381bc1a5", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:02 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "auth_id": "eve49", "format": "json"}]: dispatch
Oct 01 17:06:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.eve49", "format": "json"}]: dispatch
Oct 01 17:06:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.eve49"}]: dispatch
Oct 01 17:06:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Oct 01 17:06:02 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "auth_id": "eve49", "format": "json"}]: dispatch
Oct 01 17:06:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Oct 01 17:06:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:06:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:06:02 compute-0 unix_chkpwd[275149]: password check failed for user (root)
Oct 01 17:06:02 compute-0 sshd-session[275146]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.156.73.233  user=root
Oct 01 17:06:03 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1c897d2a-1173-4d28-affc-8c999f770456", "format": "json"}]: dispatch
Oct 01 17:06:03 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1c897d2a-1173-4d28-affc-8c999f770456", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:03 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:06:03 compute-0 ceph-mon[74273]: pgmap v1093: 305 pgs: 305 active+clean; 60 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 150 KiB/s wr, 16 op/s
Oct 01 17:06:03 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "47b2be1b-8457-4141-a779-0c9e30960e86", "auth_id": "admin", "format": "json"}]: dispatch
Oct 01 17:06:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:47b2be1b-8457-4141-a779-0c9e30960e86, vol_name:cephfs) < ""
Oct 01 17:06:03 compute-0 ceph-mgr[74571]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist
Oct 01 17:06:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:47b2be1b-8457-4141-a779-0c9e30960e86, vol_name:cephfs) < ""
Oct 01 17:06:03 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:06:03.565+0000 7f813a030640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Oct 01 17:06:03 compute-0 ceph-mgr[74571]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Oct 01 17:06:03 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "47b2be1b-8457-4141-a779-0c9e30960e86", "format": "json"}]: dispatch
Oct 01 17:06:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:47b2be1b-8457-4141-a779-0c9e30960e86, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:06:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:47b2be1b-8457-4141-a779-0c9e30960e86, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:06:03 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:06:03.669+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '47b2be1b-8457-4141-a779-0c9e30960e86' of type subvolume
Oct 01 17:06:03 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '47b2be1b-8457-4141-a779-0c9e30960e86' of type subvolume
Oct 01 17:06:03 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "47b2be1b-8457-4141-a779-0c9e30960e86", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:47b2be1b-8457-4141-a779-0c9e30960e86, vol_name:cephfs) < ""
Oct 01 17:06:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/47b2be1b-8457-4141-a779-0c9e30960e86'' moved to trashcan
Oct 01 17:06:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:06:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:47b2be1b-8457-4141-a779-0c9e30960e86, vol_name:cephfs) < ""
Oct 01 17:06:04 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 61 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 155 KiB/s wr, 16 op/s
Oct 01 17:06:04 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "508b7f89-fa92-48b9-b840-d766077bff91", "format": "json"}]: dispatch
Oct 01 17:06:04 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:508b7f89-fa92-48b9-b840-d766077bff91, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:04 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:508b7f89-fa92-48b9-b840-d766077bff91, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:05 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "47b2be1b-8457-4141-a779-0c9e30960e86", "auth_id": "admin", "format": "json"}]: dispatch
Oct 01 17:06:05 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "47b2be1b-8457-4141-a779-0c9e30960e86", "format": "json"}]: dispatch
Oct 01 17:06:05 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "47b2be1b-8457-4141-a779-0c9e30960e86", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:05 compute-0 ceph-mon[74273]: pgmap v1094: 305 pgs: 305 active+clean; 61 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 155 KiB/s wr, 16 op/s
Oct 01 17:06:05 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "508b7f89-fa92-48b9-b840-d766077bff91", "format": "json"}]: dispatch
Oct 01 17:06:05 compute-0 sshd-session[275146]: Failed password for root from 185.156.73.233 port 30304 ssh2
Oct 01 17:06:05 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:06:05.546 162304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:71:db', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:60:3f:78:bd:29'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 17:06:05 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:06:05.548 162304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 17:06:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:06:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Oct 01 17:06:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Oct 01 17:06:05 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Oct 01 17:06:05 compute-0 sshd-session[275146]: Connection closed by authenticating user root 185.156.73.233 port 30304 [preauth]
Oct 01 17:06:06 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 61 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 155 KiB/s wr, 16 op/s
Oct 01 17:06:06 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "59fb41a6-024e-4d1c-873e-759037caf2c4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:06:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:59fb41a6-024e-4d1c-873e-759037caf2c4, vol_name:cephfs) < ""
Oct 01 17:06:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/59fb41a6-024e-4d1c-873e-759037caf2c4/.meta.tmp'
Oct 01 17:06:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/59fb41a6-024e-4d1c-873e-759037caf2c4/.meta.tmp' to config b'/volumes/_nogroup/59fb41a6-024e-4d1c-873e-759037caf2c4/.meta'
Oct 01 17:06:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:59fb41a6-024e-4d1c-873e-759037caf2c4, vol_name:cephfs) < ""
Oct 01 17:06:06 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "59fb41a6-024e-4d1c-873e-759037caf2c4", "format": "json"}]: dispatch
Oct 01 17:06:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:59fb41a6-024e-4d1c-873e-759037caf2c4, vol_name:cephfs) < ""
Oct 01 17:06:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:59fb41a6-024e-4d1c-873e-759037caf2c4, vol_name:cephfs) < ""
Oct 01 17:06:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:06:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:06:06 compute-0 ceph-mon[74273]: osdmap e148: 3 total, 3 up, 3 in
Oct 01 17:06:06 compute-0 ceph-mon[74273]: pgmap v1096: 305 pgs: 305 active+clean; 61 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 155 KiB/s wr, 16 op/s
Oct 01 17:06:06 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "59fb41a6-024e-4d1c-873e-759037caf2c4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:06:06 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:06:08 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "508b7f89-fa92-48b9-b840-d766077bff91_d2c9ceb9-9055-45e2-8c01-789f174f4dee", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:508b7f89-fa92-48b9-b840-d766077bff91_d2c9ceb9-9055-45e2-8c01-789f174f4dee, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:08 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 61 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 154 KiB/s wr, 15 op/s
Oct 01 17:06:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp'
Oct 01 17:06:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp' to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta'
Oct 01 17:06:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:508b7f89-fa92-48b9-b840-d766077bff91_d2c9ceb9-9055-45e2-8c01-789f174f4dee, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:08 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "508b7f89-fa92-48b9-b840-d766077bff91", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:508b7f89-fa92-48b9-b840-d766077bff91, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp'
Oct 01 17:06:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp' to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta'
Oct 01 17:06:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:508b7f89-fa92-48b9-b840-d766077bff91, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:08 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "59fb41a6-024e-4d1c-873e-759037caf2c4", "format": "json"}]: dispatch
Oct 01 17:06:09 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "508b7f89-fa92-48b9-b840-d766077bff91_d2c9ceb9-9055-45e2-8c01-789f174f4dee", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:09 compute-0 ceph-mon[74273]: pgmap v1097: 305 pgs: 305 active+clean; 61 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 154 KiB/s wr, 15 op/s
Oct 01 17:06:09 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "508b7f89-fa92-48b9-b840-d766077bff91", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:10 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "59fb41a6-024e-4d1c-873e-759037caf2c4", "auth_id": "bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:06:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:59fb41a6-024e-4d1c-873e-759037caf2c4, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:06:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Oct 01 17:06:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Oct 01 17:06:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1,allow rw path=/volumes/_nogroup/59fb41a6-024e-4d1c-873e-759037caf2c4/651f80d7-2152-4119-82f1-a3dc272356ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_59fb41a6-024e-4d1c-873e-759037caf2c4"]} v 0) v1
Oct 01 17:06:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1,allow rw path=/volumes/_nogroup/59fb41a6-024e-4d1c-873e-759037caf2c4/651f80d7-2152-4119-82f1-a3dc272356ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_59fb41a6-024e-4d1c-873e-759037caf2c4"]}]: dispatch
Oct 01 17:06:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1,allow rw path=/volumes/_nogroup/59fb41a6-024e-4d1c-873e-759037caf2c4/651f80d7-2152-4119-82f1-a3dc272356ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_59fb41a6-024e-4d1c-873e-759037caf2c4"]}]': finished
Oct 01 17:06:10 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 61 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 123 KiB/s wr, 13 op/s
Oct 01 17:06:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Oct 01 17:06:10 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Oct 01 17:06:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:59fb41a6-024e-4d1c-873e-759037caf2c4, tenant_id:1841221f332340a299707d253063659f, vol_name:cephfs) < ""
Oct 01 17:06:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Oct 01 17:06:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1,allow rw path=/volumes/_nogroup/59fb41a6-024e-4d1c-873e-759037caf2c4/651f80d7-2152-4119-82f1-a3dc272356ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_59fb41a6-024e-4d1c-873e-759037caf2c4"]}]: dispatch
Oct 01 17:06:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1,allow rw path=/volumes/_nogroup/59fb41a6-024e-4d1c-873e-759037caf2c4/651f80d7-2152-4119-82f1-a3dc272356ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_59fb41a6-024e-4d1c-873e-759037caf2c4"]}]': finished
Oct 01 17:06:10 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Oct 01 17:06:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:06:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct 01 17:06:11 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "59fb41a6-024e-4d1c-873e-759037caf2c4", "auth_id": "bob", "tenant_id": "1841221f332340a299707d253063659f", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:06:11 compute-0 ceph-mon[74273]: pgmap v1098: 305 pgs: 305 active+clean; 61 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 123 KiB/s wr, 13 op/s
Oct 01 17:06:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct 01 17:06:11 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_17:06:11
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'images', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'vms', 'backups', 'volumes']
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "232f838a-1ec0-42c0-a4cb-74dcdee9927e", "format": "json"}]: dispatch
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:232f838a-1ec0-42c0-a4cb-74dcdee9927e, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:232f838a-1ec0-42c0-a4cb-74dcdee9927e, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:06:11 compute-0 podman[275150]: 2025-10-01 17:06:11.785129778 +0000 UTC m=+0.105744117 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "73871f97-0e9f-4424-8c8d-1fc5d34e1441", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:06:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:73871f97-0e9f-4424-8c8d-1fc5d34e1441, vol_name:cephfs) < ""
Oct 01 17:06:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/73871f97-0e9f-4424-8c8d-1fc5d34e1441/.meta.tmp'
Oct 01 17:06:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/73871f97-0e9f-4424-8c8d-1fc5d34e1441/.meta.tmp' to config b'/volumes/_nogroup/73871f97-0e9f-4424-8c8d-1fc5d34e1441/.meta'
Oct 01 17:06:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:73871f97-0e9f-4424-8c8d-1fc5d34e1441, vol_name:cephfs) < ""
Oct 01 17:06:12 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "73871f97-0e9f-4424-8c8d-1fc5d34e1441", "format": "json"}]: dispatch
Oct 01 17:06:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:73871f97-0e9f-4424-8c8d-1fc5d34e1441, vol_name:cephfs) < ""
Oct 01 17:06:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:73871f97-0e9f-4424-8c8d-1fc5d34e1441, vol_name:cephfs) < ""
Oct 01 17:06:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:06:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:06:12 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 61 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 36 KiB/s wr, 5 op/s
Oct 01 17:06:12 compute-0 ceph-mon[74273]: osdmap e149: 3 total, 3 up, 3 in
Oct 01 17:06:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:06:13 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "232f838a-1ec0-42c0-a4cb-74dcdee9927e", "format": "json"}]: dispatch
Oct 01 17:06:13 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "73871f97-0e9f-4424-8c8d-1fc5d34e1441", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:06:13 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "73871f97-0e9f-4424-8c8d-1fc5d34e1441", "format": "json"}]: dispatch
Oct 01 17:06:13 compute-0 ceph-mon[74273]: pgmap v1100: 305 pgs: 305 active+clean; 61 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 36 KiB/s wr, 5 op/s
Oct 01 17:06:13 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:06:13.550 162304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2971fc2-5b75-459a-98a0-6e626d0d4d99, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 17:06:13 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "59fb41a6-024e-4d1c-873e-759037caf2c4", "auth_id": "bob", "format": "json"}]: dispatch
Oct 01 17:06:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:59fb41a6-024e-4d1c-873e-759037caf2c4, vol_name:cephfs) < ""
Oct 01 17:06:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Oct 01 17:06:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Oct 01 17:06:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9"]} v 0) v1
Oct 01 17:06:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9"]}]: dispatch
Oct 01 17:06:13 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9"]}]': finished
Oct 01 17:06:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:59fb41a6-024e-4d1c-873e-759037caf2c4, vol_name:cephfs) < ""
Oct 01 17:06:13 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "59fb41a6-024e-4d1c-873e-759037caf2c4", "auth_id": "bob", "format": "json"}]: dispatch
Oct 01 17:06:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:59fb41a6-024e-4d1c-873e-759037caf2c4, vol_name:cephfs) < ""
Oct 01 17:06:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/59fb41a6-024e-4d1c-873e-759037caf2c4/651f80d7-2152-4119-82f1-a3dc272356ed
Oct 01 17:06:13 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/59fb41a6-024e-4d1c-873e-759037caf2c4/651f80d7-2152-4119-82f1-a3dc272356ed],prefix=session evict} (starting...)
Oct 01 17:06:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:06:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:59fb41a6-024e-4d1c-873e-759037caf2c4, vol_name:cephfs) < ""
Oct 01 17:06:14 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 62 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 85 KiB/s wr, 95 op/s
Oct 01 17:06:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Oct 01 17:06:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9"]}]: dispatch
Oct 01 17:06:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0fbcb4a0-2676-4b12-98ef-811f1d5718a9"]}]': finished
Oct 01 17:06:14 compute-0 nova_compute[259504]: 2025-10-01 17:06:14.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:06:15 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "59fb41a6-024e-4d1c-873e-759037caf2c4", "auth_id": "bob", "format": "json"}]: dispatch
Oct 01 17:06:15 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "59fb41a6-024e-4d1c-873e-759037caf2c4", "auth_id": "bob", "format": "json"}]: dispatch
Oct 01 17:06:15 compute-0 ceph-mon[74273]: pgmap v1101: 305 pgs: 305 active+clean; 62 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 85 KiB/s wr, 95 op/s
Oct 01 17:06:15 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "73871f97-0e9f-4424-8c8d-1fc5d34e1441", "auth_id": "tempest-cephx-id-2026451665", "tenant_id": "8c95d7d4c61e4a5bbe4dc2e0f6e03fd9", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:06:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2026451665, format:json, prefix:fs subvolume authorize, sub_name:73871f97-0e9f-4424-8c8d-1fc5d34e1441, tenant_id:8c95d7d4c61e4a5bbe4dc2e0f6e03fd9, vol_name:cephfs) < ""
Oct 01 17:06:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2026451665", "format": "json"} v 0) v1
Oct 01 17:06:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-2026451665", "format": "json"}]: dispatch
Oct 01 17:06:15 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: Creating meta for ID tempest-cephx-id-2026451665 with tenant 8c95d7d4c61e4a5bbe4dc2e0f6e03fd9
Oct 01 17:06:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2026451665", "caps": ["mds", "allow rw path=/volumes/_nogroup/73871f97-0e9f-4424-8c8d-1fc5d34e1441/7881a37b-f5b6-44e9-9c84-a8bd52bfbee7", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_73871f97-0e9f-4424-8c8d-1fc5d34e1441", "mon", "allow r"], "format": "json"} v 0) v1
Oct 01 17:06:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2026451665", "caps": ["mds", "allow rw path=/volumes/_nogroup/73871f97-0e9f-4424-8c8d-1fc5d34e1441/7881a37b-f5b6-44e9-9c84-a8bd52bfbee7", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_73871f97-0e9f-4424-8c8d-1fc5d34e1441", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:06:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2026451665", "caps": ["mds", "allow rw path=/volumes/_nogroup/73871f97-0e9f-4424-8c8d-1fc5d34e1441/7881a37b-f5b6-44e9-9c84-a8bd52bfbee7", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_73871f97-0e9f-4424-8c8d-1fc5d34e1441", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:06:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2026451665, format:json, prefix:fs subvolume authorize, sub_name:73871f97-0e9f-4424-8c8d-1fc5d34e1441, tenant_id:8c95d7d4c61e4a5bbe4dc2e0f6e03fd9, vol_name:cephfs) < ""
Oct 01 17:06:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:06:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct 01 17:06:15 compute-0 podman[275177]: 2025-10-01 17:06:15.777719299 +0000 UTC m=+0.082535359 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 01 17:06:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct 01 17:06:15 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct 01 17:06:16 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 62 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 87 KiB/s wr, 96 op/s
Oct 01 17:06:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-2026451665", "format": "json"}]: dispatch
Oct 01 17:06:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2026451665", "caps": ["mds", "allow rw path=/volumes/_nogroup/73871f97-0e9f-4424-8c8d-1fc5d34e1441/7881a37b-f5b6-44e9-9c84-a8bd52bfbee7", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_73871f97-0e9f-4424-8c8d-1fc5d34e1441", "mon", "allow r"], "format": "json"}]: dispatch
Oct 01 17:06:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2026451665", "caps": ["mds", "allow rw path=/volumes/_nogroup/73871f97-0e9f-4424-8c8d-1fc5d34e1441/7881a37b-f5b6-44e9-9c84-a8bd52bfbee7", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_73871f97-0e9f-4424-8c8d-1fc5d34e1441", "mon", "allow r"], "format": "json"}]': finished
Oct 01 17:06:16 compute-0 ceph-mon[74273]: osdmap e150: 3 total, 3 up, 3 in
Oct 01 17:06:16 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "73871f97-0e9f-4424-8c8d-1fc5d34e1441", "auth_id": "tempest-cephx-id-2026451665", "format": "json"}]: dispatch
Oct 01 17:06:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2026451665, format:json, prefix:fs subvolume deauthorize, sub_name:73871f97-0e9f-4424-8c8d-1fc5d34e1441, vol_name:cephfs) < ""
Oct 01 17:06:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2026451665", "format": "json"} v 0) v1
Oct 01 17:06:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-2026451665", "format": "json"}]: dispatch
Oct 01 17:06:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2026451665"} v 0) v1
Oct 01 17:06:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2026451665"}]: dispatch
Oct 01 17:06:16 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2026451665"}]': finished
Oct 01 17:06:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2026451665, format:json, prefix:fs subvolume deauthorize, sub_name:73871f97-0e9f-4424-8c8d-1fc5d34e1441, vol_name:cephfs) < ""
Oct 01 17:06:16 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "73871f97-0e9f-4424-8c8d-1fc5d34e1441", "auth_id": "tempest-cephx-id-2026451665", "format": "json"}]: dispatch
Oct 01 17:06:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2026451665, format:json, prefix:fs subvolume evict, sub_name:73871f97-0e9f-4424-8c8d-1fc5d34e1441, vol_name:cephfs) < ""
Oct 01 17:06:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2026451665, client_metadata.root=/volumes/_nogroup/73871f97-0e9f-4424-8c8d-1fc5d34e1441/7881a37b-f5b6-44e9-9c84-a8bd52bfbee7
Oct 01 17:06:16 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=tempest-cephx-id-2026451665,client_metadata.root=/volumes/_nogroup/73871f97-0e9f-4424-8c8d-1fc5d34e1441/7881a37b-f5b6-44e9-9c84-a8bd52bfbee7],prefix=session evict} (starting...)
Oct 01 17:06:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:06:16 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2026451665, format:json, prefix:fs subvolume evict, sub_name:73871f97-0e9f-4424-8c8d-1fc5d34e1441, vol_name:cephfs) < ""
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "73871f97-0e9f-4424-8c8d-1fc5d34e1441", "format": "json"}]: dispatch
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:73871f97-0e9f-4424-8c8d-1fc5d34e1441, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:73871f97-0e9f-4424-8c8d-1fc5d34e1441, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:06:17 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:06:17.053+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '73871f97-0e9f-4424-8c8d-1fc5d34e1441' of type subvolume
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '73871f97-0e9f-4424-8c8d-1fc5d34e1441' of type subvolume
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "73871f97-0e9f-4424-8c8d-1fc5d34e1441", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:73871f97-0e9f-4424-8c8d-1fc5d34e1441, vol_name:cephfs) < ""
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/73871f97-0e9f-4424-8c8d-1fc5d34e1441'' moved to trashcan
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:73871f97-0e9f-4424-8c8d-1fc5d34e1441, vol_name:cephfs) < ""
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "bob", "format": "json"}]: dispatch
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:06:17 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "73871f97-0e9f-4424-8c8d-1fc5d34e1441", "auth_id": "tempest-cephx-id-2026451665", "tenant_id": "8c95d7d4c61e4a5bbe4dc2e0f6e03fd9", "access_level": "rw", "format": "json"}]: dispatch
Oct 01 17:06:17 compute-0 ceph-mon[74273]: pgmap v1103: 305 pgs: 305 active+clean; 62 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 87 KiB/s wr, 96 op/s
Oct 01 17:06:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-2026451665", "format": "json"}]: dispatch
Oct 01 17:06:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2026451665"}]: dispatch
Oct 01 17:06:17 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2026451665"}]': finished
Oct 01 17:06:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Oct 01 17:06:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Oct 01 17:06:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0) v1
Oct 01 17:06:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.bob"}]: dispatch
Oct 01 17:06:17 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "bob", "format": "json"}]: dispatch
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1
Oct 01 17:06:17 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9/6bc94b15-f41b-47df-bdd9-fc9b59ccd4a1],prefix=session evict} (starting...)
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "232f838a-1ec0-42c0-a4cb-74dcdee9927e_382469b8-5c61-468d-9898-1ac64a07b5fe", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:232f838a-1ec0-42c0-a4cb-74dcdee9927e_382469b8-5c61-468d-9898-1ac64a07b5fe, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp'
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp' to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta'
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:232f838a-1ec0-42c0-a4cb-74dcdee9927e_382469b8-5c61-468d-9898-1ac64a07b5fe, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "232f838a-1ec0-42c0-a4cb-74dcdee9927e", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:232f838a-1ec0-42c0-a4cb-74dcdee9927e, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp'
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp' to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta'
Oct 01 17:06:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:232f838a-1ec0-42c0-a4cb-74dcdee9927e, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:18 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 170 KiB/s wr, 102 op/s
Oct 01 17:06:18 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "73871f97-0e9f-4424-8c8d-1fc5d34e1441", "auth_id": "tempest-cephx-id-2026451665", "format": "json"}]: dispatch
Oct 01 17:06:18 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "73871f97-0e9f-4424-8c8d-1fc5d34e1441", "auth_id": "tempest-cephx-id-2026451665", "format": "json"}]: dispatch
Oct 01 17:06:18 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "73871f97-0e9f-4424-8c8d-1fc5d34e1441", "format": "json"}]: dispatch
Oct 01 17:06:18 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "73871f97-0e9f-4424-8c8d-1fc5d34e1441", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:18 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "bob", "format": "json"}]: dispatch
Oct 01 17:06:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Oct 01 17:06:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth rm", "entity": "client.bob"}]: dispatch
Oct 01 17:06:18 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Oct 01 17:06:18 compute-0 nova_compute[259504]: 2025-10-01 17:06:18.749 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:06:18 compute-0 nova_compute[259504]: 2025-10-01 17:06:18.782 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:06:18 compute-0 nova_compute[259504]: 2025-10-01 17:06:18.782 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:06:18 compute-0 nova_compute[259504]: 2025-10-01 17:06:18.782 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:06:18 compute-0 nova_compute[259504]: 2025-10-01 17:06:18.783 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 17:06:18 compute-0 nova_compute[259504]: 2025-10-01 17:06:18.783 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:06:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:06:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3496970711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:06:19 compute-0 nova_compute[259504]: 2025-10-01 17:06:19.217 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:06:19 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "auth_id": "bob", "format": "json"}]: dispatch
Oct 01 17:06:19 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "232f838a-1ec0-42c0-a4cb-74dcdee9927e_382469b8-5c61-468d-9898-1ac64a07b5fe", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:19 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "232f838a-1ec0-42c0-a4cb-74dcdee9927e", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:19 compute-0 ceph-mon[74273]: pgmap v1104: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 170 KiB/s wr, 102 op/s
Oct 01 17:06:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3496970711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:06:19 compute-0 nova_compute[259504]: 2025-10-01 17:06:19.363 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 17:06:19 compute-0 nova_compute[259504]: 2025-10-01 17:06:19.364 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5083MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 17:06:19 compute-0 nova_compute[259504]: 2025-10-01 17:06:19.365 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:06:19 compute-0 nova_compute[259504]: 2025-10-01 17:06:19.365 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:06:19 compute-0 nova_compute[259504]: 2025-10-01 17:06:19.417 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 17:06:19 compute-0 nova_compute[259504]: 2025-10-01 17:06:19.417 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 17:06:19 compute-0 nova_compute[259504]: 2025-10-01 17:06:19.437 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:06:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:06:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/426454997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:06:19 compute-0 nova_compute[259504]: 2025-10-01 17:06:19.863 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:06:19 compute-0 nova_compute[259504]: 2025-10-01 17:06:19.870 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 17:06:19 compute-0 nova_compute[259504]: 2025-10-01 17:06:19.926 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 17:06:19 compute-0 nova_compute[259504]: 2025-10-01 17:06:19.929 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 17:06:19 compute-0 nova_compute[259504]: 2025-10-01 17:06:19.929 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:06:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:06:19.976 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:06:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:06:19.976 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:06:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:06:19.976 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:06:20 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 153 KiB/s wr, 93 op/s
Oct 01 17:06:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/426454997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:06:20 compute-0 ceph-mon[74273]: pgmap v1105: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 153 KiB/s wr, 93 op/s
Oct 01 17:06:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0003465693576219314 of space, bias 4.0, pg target 0.41588322914631765 quantized to 16 (current 16)
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 6.994977860259165e-07 of space, bias 1.0, pg target 0.00020984933580777494 quantized to 32 (current 32)
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "format": "json"}]: dispatch
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:06:21 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:06:21.701+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0fbcb4a0-2676-4b12-98ef-811f1d5718a9' of type subvolume
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0fbcb4a0-2676-4b12-98ef-811f1d5718a9' of type subvolume
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0fbcb4a0-2676-4b12-98ef-811f1d5718a9'' moved to trashcan
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:06:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0fbcb4a0-2676-4b12-98ef-811f1d5718a9, vol_name:cephfs) < ""
Oct 01 17:06:22 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 137 KiB/s wr, 82 op/s
Oct 01 17:06:22 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "dbbe1979-4322-4f6e-adb1-00435680f5b2", "format": "json"}]: dispatch
Oct 01 17:06:22 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:dbbe1979-4322-4f6e-adb1-00435680f5b2, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:22 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:dbbe1979-4322-4f6e-adb1-00435680f5b2, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:22 compute-0 nova_compute[259504]: 2025-10-01 17:06:22.930 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:06:22 compute-0 nova_compute[259504]: 2025-10-01 17:06:22.931 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:06:22 compute-0 nova_compute[259504]: 2025-10-01 17:06:22.931 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 17:06:22 compute-0 nova_compute[259504]: 2025-10-01 17:06:22.932 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 17:06:22 compute-0 nova_compute[259504]: 2025-10-01 17:06:22.945 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 17:06:22 compute-0 nova_compute[259504]: 2025-10-01 17:06:22.945 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:06:22 compute-0 nova_compute[259504]: 2025-10-01 17:06:22.946 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:06:22 compute-0 nova_compute[259504]: 2025-10-01 17:06:22.946 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:06:22 compute-0 nova_compute[259504]: 2025-10-01 17:06:22.947 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:06:22 compute-0 nova_compute[259504]: 2025-10-01 17:06:22.947 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 17:06:23 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "format": "json"}]: dispatch
Oct 01 17:06:23 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0fbcb4a0-2676-4b12-98ef-811f1d5718a9", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:23 compute-0 ceph-mon[74273]: pgmap v1106: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 137 KiB/s wr, 82 op/s
Oct 01 17:06:23 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "dbbe1979-4322-4f6e-adb1-00435680f5b2", "format": "json"}]: dispatch
Oct 01 17:06:24 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 125 KiB/s wr, 10 op/s
Oct 01 17:06:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct 01 17:06:25 compute-0 ceph-mon[74273]: pgmap v1107: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 125 KiB/s wr, 10 op/s
Oct 01 17:06:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct 01 17:06:25 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct 01 17:06:25 compute-0 nova_compute[259504]: 2025-10-01 17:06:25.752 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:06:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:06:26 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 125 KiB/s wr, 10 op/s
Oct 01 17:06:26 compute-0 ceph-mon[74273]: osdmap e151: 3 total, 3 up, 3 in
Oct 01 17:06:27 compute-0 ceph-mon[74273]: pgmap v1109: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 125 KiB/s wr, 10 op/s
Oct 01 17:06:27 compute-0 nova_compute[259504]: 2025-10-01 17:06:27.745 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:06:28 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 39 KiB/s wr, 4 op/s
Oct 01 17:06:28 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "dbbe1979-4322-4f6e-adb1-00435680f5b2_f50acf3d-60cf-43a2-976c-bfe81b0a66da", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:28 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:dbbe1979-4322-4f6e-adb1-00435680f5b2_f50acf3d-60cf-43a2-976c-bfe81b0a66da, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:28 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp'
Oct 01 17:06:28 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp' to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta'
Oct 01 17:06:28 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:dbbe1979-4322-4f6e-adb1-00435680f5b2_f50acf3d-60cf-43a2-976c-bfe81b0a66da, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:28 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "dbbe1979-4322-4f6e-adb1-00435680f5b2", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:28 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:dbbe1979-4322-4f6e-adb1-00435680f5b2, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:28 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp'
Oct 01 17:06:28 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp' to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta'
Oct 01 17:06:28 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:dbbe1979-4322-4f6e-adb1-00435680f5b2, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:28 compute-0 ceph-mon[74273]: pgmap v1110: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 39 KiB/s wr, 4 op/s
Oct 01 17:06:29 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "dbbe1979-4322-4f6e-adb1-00435680f5b2_f50acf3d-60cf-43a2-976c-bfe81b0a66da", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:29 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "dbbe1979-4322-4f6e-adb1-00435680f5b2", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:29 compute-0 podman[275242]: 2025-10-01 17:06:29.773761091 +0000 UTC m=+0.083379707 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 01 17:06:30 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 39 KiB/s wr, 4 op/s
Oct 01 17:06:30 compute-0 ceph-mon[74273]: pgmap v1111: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 39 KiB/s wr, 4 op/s
Oct 01 17:06:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:06:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Oct 01 17:06:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Oct 01 17:06:30 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Oct 01 17:06:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Oct 01 17:06:31 compute-0 podman[275262]: 2025-10-01 17:06:31.788303614 +0000 UTC m=+0.090070586 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 17:06:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Oct 01 17:06:31 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Oct 01 17:06:31 compute-0 ceph-mon[74273]: osdmap e152: 3 total, 3 up, 3 in
Oct 01 17:06:32 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 302 B/s rd, 14 KiB/s wr, 2 op/s
Oct 01 17:06:32 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "994e3294-4a8a-410f-bff7-df7bb18891d7_e46ea2a4-d9e1-42b5-87f3-7f48e3b76778", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:994e3294-4a8a-410f-bff7-df7bb18891d7_e46ea2a4-d9e1-42b5-87f3-7f48e3b76778, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp'
Oct 01 17:06:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp' to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta'
Oct 01 17:06:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:994e3294-4a8a-410f-bff7-df7bb18891d7_e46ea2a4-d9e1-42b5-87f3-7f48e3b76778, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:32 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "994e3294-4a8a-410f-bff7-df7bb18891d7", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:994e3294-4a8a-410f-bff7-df7bb18891d7, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp'
Oct 01 17:06:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta.tmp' to config b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0/.meta'
Oct 01 17:06:32 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:994e3294-4a8a-410f-bff7-df7bb18891d7, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:32 compute-0 ceph-mon[74273]: osdmap e153: 3 total, 3 up, 3 in
Oct 01 17:06:32 compute-0 ceph-mon[74273]: pgmap v1114: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 302 B/s rd, 14 KiB/s wr, 2 op/s
Oct 01 17:06:33 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "994e3294-4a8a-410f-bff7-df7bb18891d7_e46ea2a4-d9e1-42b5-87f3-7f48e3b76778", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:33 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "snap_name": "994e3294-4a8a-410f-bff7-df7bb18891d7", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:34 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 63 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 47 KiB/s wr, 5 op/s
Oct 01 17:06:34 compute-0 ceph-mon[74273]: pgmap v1115: 305 pgs: 305 active+clean; 63 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 47 KiB/s wr, 5 op/s
Oct 01 17:06:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:06:36 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 63 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 35 KiB/s wr, 3 op/s
Oct 01 17:06:36 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "format": "json"}]: dispatch
Oct 01 17:06:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:06:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:06:36 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:06:36.345+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea26f47e-6032-4f84-85c8-3aa43d68e5c0' of type subvolume
Oct 01 17:06:36 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea26f47e-6032-4f84-85c8-3aa43d68e5c0' of type subvolume
Oct 01 17:06:36 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ea26f47e-6032-4f84-85c8-3aa43d68e5c0'' moved to trashcan
Oct 01 17:06:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:06:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea26f47e-6032-4f84-85c8-3aa43d68e5c0, vol_name:cephfs) < ""
Oct 01 17:06:36 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b17b9bf4-9e5c-492b-a894-4f295f7cb7dd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:06:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b17b9bf4-9e5c-492b-a894-4f295f7cb7dd, vol_name:cephfs) < ""
Oct 01 17:06:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b17b9bf4-9e5c-492b-a894-4f295f7cb7dd/.meta.tmp'
Oct 01 17:06:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b17b9bf4-9e5c-492b-a894-4f295f7cb7dd/.meta.tmp' to config b'/volumes/_nogroup/b17b9bf4-9e5c-492b-a894-4f295f7cb7dd/.meta'
Oct 01 17:06:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b17b9bf4-9e5c-492b-a894-4f295f7cb7dd, vol_name:cephfs) < ""
Oct 01 17:06:36 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b17b9bf4-9e5c-492b-a894-4f295f7cb7dd", "format": "json"}]: dispatch
Oct 01 17:06:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b17b9bf4-9e5c-492b-a894-4f295f7cb7dd, vol_name:cephfs) < ""
Oct 01 17:06:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b17b9bf4-9e5c-492b-a894-4f295f7cb7dd, vol_name:cephfs) < ""
Oct 01 17:06:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:06:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:06:37 compute-0 ceph-mon[74273]: pgmap v1116: 305 pgs: 305 active+clean; 63 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 35 KiB/s wr, 3 op/s
Oct 01 17:06:37 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "format": "json"}]: dispatch
Oct 01 17:06:37 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea26f47e-6032-4f84-85c8-3aa43d68e5c0", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:06:38 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 63 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 66 KiB/s wr, 5 op/s
Oct 01 17:06:38 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b17b9bf4-9e5c-492b-a894-4f295f7cb7dd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:06:38 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b17b9bf4-9e5c-492b-a894-4f295f7cb7dd", "format": "json"}]: dispatch
Oct 01 17:06:38 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "297fd041-f36f-460b-b2b8-961d2f4e4832", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:06:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:297fd041-f36f-460b-b2b8-961d2f4e4832, vol_name:cephfs) < ""
Oct 01 17:06:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/297fd041-f36f-460b-b2b8-961d2f4e4832/.meta.tmp'
Oct 01 17:06:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/297fd041-f36f-460b-b2b8-961d2f4e4832/.meta.tmp' to config b'/volumes/_nogroup/297fd041-f36f-460b-b2b8-961d2f4e4832/.meta'
Oct 01 17:06:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:297fd041-f36f-460b-b2b8-961d2f4e4832, vol_name:cephfs) < ""
Oct 01 17:06:38 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "297fd041-f36f-460b-b2b8-961d2f4e4832", "format": "json"}]: dispatch
Oct 01 17:06:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:297fd041-f36f-460b-b2b8-961d2f4e4832, vol_name:cephfs) < ""
Oct 01 17:06:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:297fd041-f36f-460b-b2b8-961d2f4e4832, vol_name:cephfs) < ""
Oct 01 17:06:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:06:38 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:06:39 compute-0 ceph-mon[74273]: pgmap v1117: 305 pgs: 305 active+clean; 63 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 66 KiB/s wr, 5 op/s
Oct 01 17:06:39 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:06:40 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 63 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 439 B/s rd, 57 KiB/s wr, 5 op/s
Oct 01 17:06:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Oct 01 17:06:40 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "297fd041-f36f-460b-b2b8-961d2f4e4832", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:06:40 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "297fd041-f36f-460b-b2b8-961d2f4e4832", "format": "json"}]: dispatch
Oct 01 17:06:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Oct 01 17:06:40 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Oct 01 17:06:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:06:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Oct 01 17:06:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Oct 01 17:06:40 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Oct 01 17:06:41 compute-0 ceph-mon[74273]: pgmap v1118: 305 pgs: 305 active+clean; 63 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 439 B/s rd, 57 KiB/s wr, 5 op/s
Oct 01 17:06:41 compute-0 ceph-mon[74273]: osdmap e154: 3 total, 3 up, 3 in
Oct 01 17:06:41 compute-0 ceph-mon[74273]: osdmap e155: 3 total, 3 up, 3 in
Oct 01 17:06:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:06:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:06:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:06:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:06:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:06:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:06:42 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "297fd041-f36f-460b-b2b8-961d2f4e4832", "snap_name": "148c2cf7-d72f-4fa6-b40a-74fc8c8c25b7", "format": "json"}]: dispatch
Oct 01 17:06:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:148c2cf7-d72f-4fa6-b40a-74fc8c8c25b7, sub_name:297fd041-f36f-460b-b2b8-961d2f4e4832, vol_name:cephfs) < ""
Oct 01 17:06:42 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 63 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 32 KiB/s wr, 2 op/s
Oct 01 17:06:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:148c2cf7-d72f-4fa6-b40a-74fc8c8c25b7, sub_name:297fd041-f36f-460b-b2b8-961d2f4e4832, vol_name:cephfs) < ""
Oct 01 17:06:42 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b17b9bf4-9e5c-492b-a894-4f295f7cb7dd", "format": "json"}]: dispatch
Oct 01 17:06:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b17b9bf4-9e5c-492b-a894-4f295f7cb7dd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:06:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b17b9bf4-9e5c-492b-a894-4f295f7cb7dd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:06:42 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:06:42.406+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b17b9bf4-9e5c-492b-a894-4f295f7cb7dd' of type subvolume
Oct 01 17:06:42 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b17b9bf4-9e5c-492b-a894-4f295f7cb7dd' of type subvolume
Oct 01 17:06:42 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b17b9bf4-9e5c-492b-a894-4f295f7cb7dd", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b17b9bf4-9e5c-492b-a894-4f295f7cb7dd, vol_name:cephfs) < ""
Oct 01 17:06:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b17b9bf4-9e5c-492b-a894-4f295f7cb7dd'' moved to trashcan
Oct 01 17:06:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:06:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b17b9bf4-9e5c-492b-a894-4f295f7cb7dd, vol_name:cephfs) < ""
Oct 01 17:06:42 compute-0 podman[275283]: 2025-10-01 17:06:42.818484163 +0000 UTC m=+0.132823104 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 01 17:06:43 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "297fd041-f36f-460b-b2b8-961d2f4e4832", "snap_name": "148c2cf7-d72f-4fa6-b40a-74fc8c8c25b7", "format": "json"}]: dispatch
Oct 01 17:06:43 compute-0 ceph-mon[74273]: pgmap v1121: 305 pgs: 305 active+clean; 63 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 32 KiB/s wr, 2 op/s
Oct 01 17:06:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 17:06:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2574907918' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:06:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 17:06:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2574907918' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:06:44 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 70 KiB/s wr, 5 op/s
Oct 01 17:06:44 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b17b9bf4-9e5c-492b-a894-4f295f7cb7dd", "format": "json"}]: dispatch
Oct 01 17:06:44 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b17b9bf4-9e5c-492b-a894-4f295f7cb7dd", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2574907918' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:06:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2574907918' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:06:45 compute-0 ceph-mon[74273]: pgmap v1122: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 70 KiB/s wr, 5 op/s
Oct 01 17:06:45 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "297fd041-f36f-460b-b2b8-961d2f4e4832", "snap_name": "148c2cf7-d72f-4fa6-b40a-74fc8c8c25b7_6f05dd35-7704-4276-8a47-d1965598e3c4", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:148c2cf7-d72f-4fa6-b40a-74fc8c8c25b7_6f05dd35-7704-4276-8a47-d1965598e3c4, sub_name:297fd041-f36f-460b-b2b8-961d2f4e4832, vol_name:cephfs) < ""
Oct 01 17:06:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/297fd041-f36f-460b-b2b8-961d2f4e4832/.meta.tmp'
Oct 01 17:06:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/297fd041-f36f-460b-b2b8-961d2f4e4832/.meta.tmp' to config b'/volumes/_nogroup/297fd041-f36f-460b-b2b8-961d2f4e4832/.meta'
Oct 01 17:06:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:148c2cf7-d72f-4fa6-b40a-74fc8c8c25b7_6f05dd35-7704-4276-8a47-d1965598e3c4, sub_name:297fd041-f36f-460b-b2b8-961d2f4e4832, vol_name:cephfs) < ""
Oct 01 17:06:45 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "297fd041-f36f-460b-b2b8-961d2f4e4832", "snap_name": "148c2cf7-d72f-4fa6-b40a-74fc8c8c25b7", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:148c2cf7-d72f-4fa6-b40a-74fc8c8c25b7, sub_name:297fd041-f36f-460b-b2b8-961d2f4e4832, vol_name:cephfs) < ""
Oct 01 17:06:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/297fd041-f36f-460b-b2b8-961d2f4e4832/.meta.tmp'
Oct 01 17:06:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/297fd041-f36f-460b-b2b8-961d2f4e4832/.meta.tmp' to config b'/volumes/_nogroup/297fd041-f36f-460b-b2b8-961d2f4e4832/.meta'
Oct 01 17:06:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:148c2cf7-d72f-4fa6-b40a-74fc8c8c25b7, sub_name:297fd041-f36f-460b-b2b8-961d2f4e4832, vol_name:cephfs) < ""
Oct 01 17:06:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:06:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Oct 01 17:06:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Oct 01 17:06:45 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Oct 01 17:06:46 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 51 KiB/s wr, 3 op/s
Oct 01 17:06:46 compute-0 podman[275309]: 2025-10-01 17:06:46.731790569 +0000 UTC m=+0.053305268 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 01 17:06:46 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "297fd041-f36f-460b-b2b8-961d2f4e4832", "snap_name": "148c2cf7-d72f-4fa6-b40a-74fc8c8c25b7_6f05dd35-7704-4276-8a47-d1965598e3c4", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:46 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "297fd041-f36f-460b-b2b8-961d2f4e4832", "snap_name": "148c2cf7-d72f-4fa6-b40a-74fc8c8c25b7", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:46 compute-0 ceph-mon[74273]: osdmap e156: 3 total, 3 up, 3 in
Oct 01 17:06:46 compute-0 ceph-mon[74273]: pgmap v1124: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 51 KiB/s wr, 3 op/s
Oct 01 17:06:48 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 778 B/s rd, 70 KiB/s wr, 6 op/s
Oct 01 17:06:49 compute-0 ceph-mon[74273]: pgmap v1125: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 778 B/s rd, 70 KiB/s wr, 6 op/s
Oct 01 17:06:49 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "297fd041-f36f-460b-b2b8-961d2f4e4832", "format": "json"}]: dispatch
Oct 01 17:06:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:297fd041-f36f-460b-b2b8-961d2f4e4832, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:06:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:297fd041-f36f-460b-b2b8-961d2f4e4832, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:06:49 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '297fd041-f36f-460b-b2b8-961d2f4e4832' of type subvolume
Oct 01 17:06:49 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:06:49.495+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '297fd041-f36f-460b-b2b8-961d2f4e4832' of type subvolume
Oct 01 17:06:49 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "297fd041-f36f-460b-b2b8-961d2f4e4832", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:297fd041-f36f-460b-b2b8-961d2f4e4832, vol_name:cephfs) < ""
Oct 01 17:06:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/297fd041-f36f-460b-b2b8-961d2f4e4832'' moved to trashcan
Oct 01 17:06:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:06:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:297fd041-f36f-460b-b2b8-961d2f4e4832, vol_name:cephfs) < ""
Oct 01 17:06:50 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 660 B/s rd, 59 KiB/s wr, 5 op/s
Oct 01 17:06:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Oct 01 17:06:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Oct 01 17:06:50 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Oct 01 17:06:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:06:51 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "297fd041-f36f-460b-b2b8-961d2f4e4832", "format": "json"}]: dispatch
Oct 01 17:06:51 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "297fd041-f36f-460b-b2b8-961d2f4e4832", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:51 compute-0 ceph-mon[74273]: pgmap v1126: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 660 B/s rd, 59 KiB/s wr, 5 op/s
Oct 01 17:06:51 compute-0 ceph-mon[74273]: osdmap e157: 3 total, 3 up, 3 in
Oct 01 17:06:52 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 31 KiB/s wr, 3 op/s
Oct 01 17:06:52 compute-0 ceph-mon[74273]: pgmap v1128: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 31 KiB/s wr, 3 op/s
Oct 01 17:06:53 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2f222ba4-6cc6-4b5d-9beb-b4ad27457477", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:06:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2f222ba4-6cc6-4b5d-9beb-b4ad27457477, vol_name:cephfs) < ""
Oct 01 17:06:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2f222ba4-6cc6-4b5d-9beb-b4ad27457477/.meta.tmp'
Oct 01 17:06:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2f222ba4-6cc6-4b5d-9beb-b4ad27457477/.meta.tmp' to config b'/volumes/_nogroup/2f222ba4-6cc6-4b5d-9beb-b4ad27457477/.meta'
Oct 01 17:06:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2f222ba4-6cc6-4b5d-9beb-b4ad27457477, vol_name:cephfs) < ""
Oct 01 17:06:53 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2f222ba4-6cc6-4b5d-9beb-b4ad27457477", "format": "json"}]: dispatch
Oct 01 17:06:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2f222ba4-6cc6-4b5d-9beb-b4ad27457477, vol_name:cephfs) < ""
Oct 01 17:06:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2f222ba4-6cc6-4b5d-9beb-b4ad27457477, vol_name:cephfs) < ""
Oct 01 17:06:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:06:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:06:53 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:06:54 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 738 B/s rd, 58 KiB/s wr, 5 op/s
Oct 01 17:06:54 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2f222ba4-6cc6-4b5d-9beb-b4ad27457477", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:06:54 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2f222ba4-6cc6-4b5d-9beb-b4ad27457477", "format": "json"}]: dispatch
Oct 01 17:06:54 compute-0 ceph-mon[74273]: pgmap v1129: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 738 B/s rd, 58 KiB/s wr, 5 op/s
Oct 01 17:06:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:06:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Oct 01 17:06:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Oct 01 17:06:55 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Oct 01 17:06:56 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 30 KiB/s wr, 2 op/s
Oct 01 17:06:57 compute-0 ceph-mon[74273]: osdmap e158: 3 total, 3 up, 3 in
Oct 01 17:06:57 compute-0 ceph-mon[74273]: pgmap v1131: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 30 KiB/s wr, 2 op/s
Oct 01 17:06:57 compute-0 sudo[275328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:06:57 compute-0 sudo[275328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:06:57 compute-0 sudo[275328]: pam_unix(sudo:session): session closed for user root
Oct 01 17:06:57 compute-0 sudo[275353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:06:57 compute-0 sudo[275353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:06:57 compute-0 sudo[275353]: pam_unix(sudo:session): session closed for user root
Oct 01 17:06:57 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "efdb0876-5085-415d-ba20-1a51461b6ac1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:06:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:efdb0876-5085-415d-ba20-1a51461b6ac1, vol_name:cephfs) < ""
Oct 01 17:06:57 compute-0 sudo[275378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:06:57 compute-0 sudo[275378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:06:57 compute-0 sudo[275378]: pam_unix(sudo:session): session closed for user root
Oct 01 17:06:57 compute-0 sudo[275403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 17:06:57 compute-0 sudo[275403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:06:58 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 68 KiB/s wr, 3 op/s
Oct 01 17:06:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/efdb0876-5085-415d-ba20-1a51461b6ac1/.meta.tmp'
Oct 01 17:06:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/efdb0876-5085-415d-ba20-1a51461b6ac1/.meta.tmp' to config b'/volumes/_nogroup/efdb0876-5085-415d-ba20-1a51461b6ac1/.meta'
Oct 01 17:06:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:efdb0876-5085-415d-ba20-1a51461b6ac1, vol_name:cephfs) < ""
Oct 01 17:06:58 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "efdb0876-5085-415d-ba20-1a51461b6ac1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:06:58 compute-0 ceph-mon[74273]: pgmap v1132: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 68 KiB/s wr, 3 op/s
Oct 01 17:06:58 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "efdb0876-5085-415d-ba20-1a51461b6ac1", "format": "json"}]: dispatch
Oct 01 17:06:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:efdb0876-5085-415d-ba20-1a51461b6ac1, vol_name:cephfs) < ""
Oct 01 17:06:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:efdb0876-5085-415d-ba20-1a51461b6ac1, vol_name:cephfs) < ""
Oct 01 17:06:58 compute-0 podman[275501]: 2025-10-01 17:06:58.773091282 +0000 UTC m=+0.481479147 container exec bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 17:06:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:06:58 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:06:58 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2f222ba4-6cc6-4b5d-9beb-b4ad27457477", "format": "json"}]: dispatch
Oct 01 17:06:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2f222ba4-6cc6-4b5d-9beb-b4ad27457477, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:06:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2f222ba4-6cc6-4b5d-9beb-b4ad27457477, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:06:58 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2f222ba4-6cc6-4b5d-9beb-b4ad27457477' of type subvolume
Oct 01 17:06:58 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:06:58.907+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2f222ba4-6cc6-4b5d-9beb-b4ad27457477' of type subvolume
Oct 01 17:06:58 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2f222ba4-6cc6-4b5d-9beb-b4ad27457477", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2f222ba4-6cc6-4b5d-9beb-b4ad27457477, vol_name:cephfs) < ""
Oct 01 17:06:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2f222ba4-6cc6-4b5d-9beb-b4ad27457477'' moved to trashcan
Oct 01 17:06:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:06:58 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2f222ba4-6cc6-4b5d-9beb-b4ad27457477, vol_name:cephfs) < ""
Oct 01 17:06:58 compute-0 podman[275501]: 2025-10-01 17:06:58.993066702 +0000 UTC m=+0.701454567 container exec_died bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:06:59 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "efdb0876-5085-415d-ba20-1a51461b6ac1", "format": "json"}]: dispatch
Oct 01 17:06:59 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:06:59 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2f222ba4-6cc6-4b5d-9beb-b4ad27457477", "format": "json"}]: dispatch
Oct 01 17:06:59 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2f222ba4-6cc6-4b5d-9beb-b4ad27457477", "force": true, "format": "json"}]: dispatch
Oct 01 17:06:59 compute-0 sudo[275403]: pam_unix(sudo:session): session closed for user root
Oct 01 17:06:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:06:59 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:06:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:06:59 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:06:59 compute-0 sudo[275661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:06:59 compute-0 sudo[275661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:06:59 compute-0 sudo[275661]: pam_unix(sudo:session): session closed for user root
Oct 01 17:07:00 compute-0 sudo[275688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:07:00 compute-0 sudo[275688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:07:00 compute-0 sudo[275688]: pam_unix(sudo:session): session closed for user root
Oct 01 17:07:00 compute-0 podman[275685]: 2025-10-01 17:07:00.060183403 +0000 UTC m=+0.100080300 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid)
Oct 01 17:07:00 compute-0 sudo[275732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:07:00 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 208 B/s rd, 56 KiB/s wr, 3 op/s
Oct 01 17:07:00 compute-0 sudo[275732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:07:00 compute-0 sudo[275732]: pam_unix(sudo:session): session closed for user root
Oct 01 17:07:00 compute-0 sudo[275758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 17:07:00 compute-0 sudo[275758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:07:00 compute-0 sudo[275758]: pam_unix(sudo:session): session closed for user root
Oct 01 17:07:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 01 17:07:00 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 17:07:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:07:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:07:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 17:07:00 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:07:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 17:07:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:07:00 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:07:00 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev f171b13d-80b5-498b-a4c5-61316d15eb31 does not exist
Oct 01 17:07:00 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 53b21c39-5776-4157-96b9-55095af93a35 does not exist
Oct 01 17:07:00 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev a1281eff-cf60-4954-ae58-85315115fce0 does not exist
Oct 01 17:07:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 17:07:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:07:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 17:07:00 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:07:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:07:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:07:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:07:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:07:00 compute-0 ceph-mon[74273]: pgmap v1133: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 208 B/s rd, 56 KiB/s wr, 3 op/s
Oct 01 17:07:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 17:07:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:07:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:07:00 compute-0 sudo[275814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:07:00 compute-0 sudo[275814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:07:00 compute-0 sudo[275814]: pam_unix(sudo:session): session closed for user root
Oct 01 17:07:01 compute-0 sudo[275839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:07:01 compute-0 sudo[275839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:07:01 compute-0 sudo[275839]: pam_unix(sudo:session): session closed for user root
Oct 01 17:07:01 compute-0 sudo[275864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:07:01 compute-0 sudo[275864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:07:01 compute-0 sudo[275864]: pam_unix(sudo:session): session closed for user root
Oct 01 17:07:01 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "efdb0876-5085-415d-ba20-1a51461b6ac1", "snap_name": "75cadf3c-64ac-47cc-8482-523c4f46622a", "format": "json"}]: dispatch
Oct 01 17:07:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:75cadf3c-64ac-47cc-8482-523c4f46622a, sub_name:efdb0876-5085-415d-ba20-1a51461b6ac1, vol_name:cephfs) < ""
Oct 01 17:07:01 compute-0 sudo[275889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 17:07:01 compute-0 sudo[275889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:07:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:75cadf3c-64ac-47cc-8482-523c4f46622a, sub_name:efdb0876-5085-415d-ba20-1a51461b6ac1, vol_name:cephfs) < ""
Oct 01 17:07:01 compute-0 podman[275954]: 2025-10-01 17:07:01.576099433 +0000 UTC m=+0.087374009 container create 0c20f94b768a540a5125799a90e1e9885db051362df7875f147ee00eb710b188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 17:07:01 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7eafcb7a-4d98-4306-a178-8f37dd36887c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:07:01 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7eafcb7a-4d98-4306-a178-8f37dd36887c, vol_name:cephfs) < ""
Oct 01 17:07:01 compute-0 podman[275954]: 2025-10-01 17:07:01.51745621 +0000 UTC m=+0.028730856 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:07:01 compute-0 systemd[1]: Started libpod-conmon-0c20f94b768a540a5125799a90e1e9885db051362df7875f147ee00eb710b188.scope.
Oct 01 17:07:01 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:07:01 compute-0 podman[275954]: 2025-10-01 17:07:01.850756264 +0000 UTC m=+0.362030850 container init 0c20f94b768a540a5125799a90e1e9885db051362df7875f147ee00eb710b188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:07:01 compute-0 podman[275954]: 2025-10-01 17:07:01.862737697 +0000 UTC m=+0.374012293 container start 0c20f94b768a540a5125799a90e1e9885db051362df7875f147ee00eb710b188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mclean, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 17:07:01 compute-0 systemd[1]: libpod-0c20f94b768a540a5125799a90e1e9885db051362df7875f147ee00eb710b188.scope: Deactivated successfully.
Oct 01 17:07:01 compute-0 silly_mclean[275971]: 167 167
Oct 01 17:07:01 compute-0 conmon[275971]: conmon 0c20f94b768a540a5125 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0c20f94b768a540a5125799a90e1e9885db051362df7875f147ee00eb710b188.scope/container/memory.events
Oct 01 17:07:01 compute-0 podman[275954]: 2025-10-01 17:07:01.945973142 +0000 UTC m=+0.457247728 container attach 0c20f94b768a540a5125799a90e1e9885db051362df7875f147ee00eb710b188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:07:01 compute-0 podman[275954]: 2025-10-01 17:07:01.946464747 +0000 UTC m=+0.457739353 container died 0c20f94b768a540a5125799a90e1e9885db051362df7875f147ee00eb710b188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mclean, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:07:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:07:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:07:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:07:02 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:07:02 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "efdb0876-5085-415d-ba20-1a51461b6ac1", "snap_name": "75cadf3c-64ac-47cc-8482-523c4f46622a", "format": "json"}]: dispatch
Oct 01 17:07:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7eafcb7a-4d98-4306-a178-8f37dd36887c/.meta.tmp'
Oct 01 17:07:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7eafcb7a-4d98-4306-a178-8f37dd36887c/.meta.tmp' to config b'/volumes/_nogroup/7eafcb7a-4d98-4306-a178-8f37dd36887c/.meta'
Oct 01 17:07:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7eafcb7a-4d98-4306-a178-8f37dd36887c, vol_name:cephfs) < ""
Oct 01 17:07:02 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7eafcb7a-4d98-4306-a178-8f37dd36887c", "format": "json"}]: dispatch
Oct 01 17:07:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7eafcb7a-4d98-4306-a178-8f37dd36887c, vol_name:cephfs) < ""
Oct 01 17:07:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7eafcb7a-4d98-4306-a178-8f37dd36887c, vol_name:cephfs) < ""
Oct 01 17:07:02 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 55 KiB/s wr, 3 op/s
Oct 01 17:07:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:07:02 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:07:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-48a201d3915af9c68173b0515b0a040d235181827a2c9fe9989dc04a09c0b5a5-merged.mount: Deactivated successfully.
Oct 01 17:07:02 compute-0 podman[275954]: 2025-10-01 17:07:02.431003165 +0000 UTC m=+0.942277721 container remove 0c20f94b768a540a5125799a90e1e9885db051362df7875f147ee00eb710b188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:07:02 compute-0 podman[275978]: 2025-10-01 17:07:02.4928647 +0000 UTC m=+0.591103293 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Oct 01 17:07:02 compute-0 systemd[1]: libpod-conmon-0c20f94b768a540a5125799a90e1e9885db051362df7875f147ee00eb710b188.scope: Deactivated successfully.
Oct 01 17:07:02 compute-0 podman[276016]: 2025-10-01 17:07:02.568909301 +0000 UTC m=+0.021454321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:07:02 compute-0 podman[276016]: 2025-10-01 17:07:02.668649447 +0000 UTC m=+0.121194427 container create 6f2d18b0c7baeeac49944cfe8f55c7747b086d06985c6ba7b7f9e2e66887b94a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kapitsa, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 17:07:02 compute-0 systemd[1]: Started libpod-conmon-6f2d18b0c7baeeac49944cfe8f55c7747b086d06985c6ba7b7f9e2e66887b94a.scope.
Oct 01 17:07:02 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:07:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f620a7c77b9406609be6d8dec4b694f62326a45534e0aa224cb5f1f6a2435f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:07:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f620a7c77b9406609be6d8dec4b694f62326a45534e0aa224cb5f1f6a2435f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:07:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f620a7c77b9406609be6d8dec4b694f62326a45534e0aa224cb5f1f6a2435f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:07:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f620a7c77b9406609be6d8dec4b694f62326a45534e0aa224cb5f1f6a2435f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:07:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f620a7c77b9406609be6d8dec4b694f62326a45534e0aa224cb5f1f6a2435f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 17:07:02 compute-0 podman[276016]: 2025-10-01 17:07:02.883624244 +0000 UTC m=+0.336169254 container init 6f2d18b0c7baeeac49944cfe8f55c7747b086d06985c6ba7b7f9e2e66887b94a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kapitsa, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 17:07:02 compute-0 podman[276016]: 2025-10-01 17:07:02.900652619 +0000 UTC m=+0.353197589 container start 6f2d18b0c7baeeac49944cfe8f55c7747b086d06985c6ba7b7f9e2e66887b94a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kapitsa, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 17:07:02 compute-0 podman[276016]: 2025-10-01 17:07:02.977960063 +0000 UTC m=+0.430505073 container attach 6f2d18b0c7baeeac49944cfe8f55c7747b086d06985c6ba7b7f9e2e66887b94a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kapitsa, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:07:03 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7eafcb7a-4d98-4306-a178-8f37dd36887c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:07:03 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7eafcb7a-4d98-4306-a178-8f37dd36887c", "format": "json"}]: dispatch
Oct 01 17:07:03 compute-0 ceph-mon[74273]: pgmap v1134: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 55 KiB/s wr, 3 op/s
Oct 01 17:07:03 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:07:04 compute-0 adoring_kapitsa[276034]: --> passed data devices: 0 physical, 3 LVM
Oct 01 17:07:04 compute-0 adoring_kapitsa[276034]: --> relative data size: 1.0
Oct 01 17:07:04 compute-0 adoring_kapitsa[276034]: --> All data devices are unavailable
Oct 01 17:07:04 compute-0 systemd[1]: libpod-6f2d18b0c7baeeac49944cfe8f55c7747b086d06985c6ba7b7f9e2e66887b94a.scope: Deactivated successfully.
Oct 01 17:07:04 compute-0 systemd[1]: libpod-6f2d18b0c7baeeac49944cfe8f55c7747b086d06985c6ba7b7f9e2e66887b94a.scope: Consumed 1.084s CPU time.
Oct 01 17:07:04 compute-0 podman[276016]: 2025-10-01 17:07:04.04559791 +0000 UTC m=+1.498142910 container died 6f2d18b0c7baeeac49944cfe8f55c7747b086d06985c6ba7b7f9e2e66887b94a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:07:04 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 65 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 62 KiB/s wr, 4 op/s
Oct 01 17:07:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-29f620a7c77b9406609be6d8dec4b694f62326a45534e0aa224cb5f1f6a2435f-merged.mount: Deactivated successfully.
Oct 01 17:07:04 compute-0 ceph-mon[74273]: pgmap v1135: 305 pgs: 305 active+clean; 65 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 62 KiB/s wr, 4 op/s
Oct 01 17:07:05 compute-0 podman[276016]: 2025-10-01 17:07:05.245646715 +0000 UTC m=+2.698191705 container remove 6f2d18b0c7baeeac49944cfe8f55c7747b086d06985c6ba7b7f9e2e66887b94a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 17:07:05 compute-0 sudo[275889]: pam_unix(sudo:session): session closed for user root
Oct 01 17:07:05 compute-0 systemd[1]: libpod-conmon-6f2d18b0c7baeeac49944cfe8f55c7747b086d06985c6ba7b7f9e2e66887b94a.scope: Deactivated successfully.
Oct 01 17:07:05 compute-0 sudo[276075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:07:05 compute-0 sudo[276075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:07:05 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7eafcb7a-4d98-4306-a178-8f37dd36887c", "snap_name": "ef793e02-9980-4db2-8e9a-aaf0250b4696", "format": "json"}]: dispatch
Oct 01 17:07:05 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ef793e02-9980-4db2-8e9a-aaf0250b4696, sub_name:7eafcb7a-4d98-4306-a178-8f37dd36887c, vol_name:cephfs) < ""
Oct 01 17:07:05 compute-0 sudo[276075]: pam_unix(sudo:session): session closed for user root
Oct 01 17:07:05 compute-0 sudo[276102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:07:05 compute-0 sudo[276102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:07:05 compute-0 sudo[276102]: pam_unix(sudo:session): session closed for user root
Oct 01 17:07:05 compute-0 sudo[276127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:07:05 compute-0 sudo[276127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:07:05 compute-0 sudo[276127]: pam_unix(sudo:session): session closed for user root
Oct 01 17:07:05 compute-0 sudo[276152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 17:07:05 compute-0 sudo[276152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:07:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:07:06 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 65 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 198 B/s rd, 60 KiB/s wr, 4 op/s
Oct 01 17:07:06 compute-0 podman[276217]: 2025-10-01 17:07:06.10085668 +0000 UTC m=+0.039105853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:07:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ef793e02-9980-4db2-8e9a-aaf0250b4696, sub_name:7eafcb7a-4d98-4306-a178-8f37dd36887c, vol_name:cephfs) < ""
Oct 01 17:07:06 compute-0 podman[276217]: 2025-10-01 17:07:06.468224924 +0000 UTC m=+0.406474027 container create 955a234920d4b87d1ef6c370227421ca6c88ab8d93f6c20d1411210e9c082d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Oct 01 17:07:06 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "efdb0876-5085-415d-ba20-1a51461b6ac1", "snap_name": "75cadf3c-64ac-47cc-8482-523c4f46622a_ca7f273a-3f2e-41e5-9026-1064e83c48b4", "force": true, "format": "json"}]: dispatch
Oct 01 17:07:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:75cadf3c-64ac-47cc-8482-523c4f46622a_ca7f273a-3f2e-41e5-9026-1064e83c48b4, sub_name:efdb0876-5085-415d-ba20-1a51461b6ac1, vol_name:cephfs) < ""
Oct 01 17:07:06 compute-0 systemd[1]: Started libpod-conmon-955a234920d4b87d1ef6c370227421ca6c88ab8d93f6c20d1411210e9c082d10.scope.
Oct 01 17:07:06 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:07:06 compute-0 podman[276217]: 2025-10-01 17:07:06.866375014 +0000 UTC m=+0.804624207 container init 955a234920d4b87d1ef6c370227421ca6c88ab8d93f6c20d1411210e9c082d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:07:06 compute-0 podman[276217]: 2025-10-01 17:07:06.87863574 +0000 UTC m=+0.816884873 container start 955a234920d4b87d1ef6c370227421ca6c88ab8d93f6c20d1411210e9c082d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bouman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Oct 01 17:07:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/efdb0876-5085-415d-ba20-1a51461b6ac1/.meta.tmp'
Oct 01 17:07:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/efdb0876-5085-415d-ba20-1a51461b6ac1/.meta.tmp' to config b'/volumes/_nogroup/efdb0876-5085-415d-ba20-1a51461b6ac1/.meta'
Oct 01 17:07:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:75cadf3c-64ac-47cc-8482-523c4f46622a_ca7f273a-3f2e-41e5-9026-1064e83c48b4, sub_name:efdb0876-5085-415d-ba20-1a51461b6ac1, vol_name:cephfs) < ""
Oct 01 17:07:06 compute-0 magical_bouman[276234]: 167 167
Oct 01 17:07:06 compute-0 systemd[1]: libpod-955a234920d4b87d1ef6c370227421ca6c88ab8d93f6c20d1411210e9c082d10.scope: Deactivated successfully.
Oct 01 17:07:06 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "efdb0876-5085-415d-ba20-1a51461b6ac1", "snap_name": "75cadf3c-64ac-47cc-8482-523c4f46622a", "force": true, "format": "json"}]: dispatch
Oct 01 17:07:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:75cadf3c-64ac-47cc-8482-523c4f46622a, sub_name:efdb0876-5085-415d-ba20-1a51461b6ac1, vol_name:cephfs) < ""
Oct 01 17:07:07 compute-0 podman[276217]: 2025-10-01 17:07:07.031861053 +0000 UTC m=+0.970110146 container attach 955a234920d4b87d1ef6c370227421ca6c88ab8d93f6c20d1411210e9c082d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:07:07 compute-0 podman[276217]: 2025-10-01 17:07:07.033229828 +0000 UTC m=+0.971478941 container died 955a234920d4b87d1ef6c370227421ca6c88ab8d93f6c20d1411210e9c082d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bouman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:07:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/efdb0876-5085-415d-ba20-1a51461b6ac1/.meta.tmp'
Oct 01 17:07:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/efdb0876-5085-415d-ba20-1a51461b6ac1/.meta.tmp' to config b'/volumes/_nogroup/efdb0876-5085-415d-ba20-1a51461b6ac1/.meta'
Oct 01 17:07:07 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7eafcb7a-4d98-4306-a178-8f37dd36887c", "snap_name": "ef793e02-9980-4db2-8e9a-aaf0250b4696", "format": "json"}]: dispatch
Oct 01 17:07:07 compute-0 ceph-mon[74273]: pgmap v1136: 305 pgs: 305 active+clean; 65 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 198 B/s rd, 60 KiB/s wr, 4 op/s
Oct 01 17:07:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-67f5692ae5db506bc19fd1395a758cf2149670717afb2d49101ba2ecbaa03ddb-merged.mount: Deactivated successfully.
Oct 01 17:07:08 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 65 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 77 KiB/s wr, 4 op/s
Oct 01 17:07:08 compute-0 podman[276217]: 2025-10-01 17:07:08.741223911 +0000 UTC m=+2.679473004 container remove 955a234920d4b87d1ef6c370227421ca6c88ab8d93f6c20d1411210e9c082d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:07:08 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "efdb0876-5085-415d-ba20-1a51461b6ac1", "snap_name": "75cadf3c-64ac-47cc-8482-523c4f46622a_ca7f273a-3f2e-41e5-9026-1064e83c48b4", "force": true, "format": "json"}]: dispatch
Oct 01 17:07:08 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "efdb0876-5085-415d-ba20-1a51461b6ac1", "snap_name": "75cadf3c-64ac-47cc-8482-523c4f46622a", "force": true, "format": "json"}]: dispatch
Oct 01 17:07:08 compute-0 ceph-mon[74273]: pgmap v1137: 305 pgs: 305 active+clean; 65 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 77 KiB/s wr, 4 op/s
Oct 01 17:07:08 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:75cadf3c-64ac-47cc-8482-523c4f46622a, sub_name:efdb0876-5085-415d-ba20-1a51461b6ac1, vol_name:cephfs) < ""
Oct 01 17:07:08 compute-0 systemd[1]: libpod-conmon-955a234920d4b87d1ef6c370227421ca6c88ab8d93f6c20d1411210e9c082d10.scope: Deactivated successfully.
Oct 01 17:07:09 compute-0 podman[276258]: 2025-10-01 17:07:08.918478632 +0000 UTC m=+0.029272792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:07:09 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "7eafcb7a-4d98-4306-a178-8f37dd36887c", "snap_name": "ef793e02-9980-4db2-8e9a-aaf0250b4696", "target_sub_name": "b3a00dd2-1018-47de-9729-c563b1ade8d1", "format": "json"}]: dispatch
Oct 01 17:07:09 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:ef793e02-9980-4db2-8e9a-aaf0250b4696, sub_name:7eafcb7a-4d98-4306-a178-8f37dd36887c, target_sub_name:b3a00dd2-1018-47de-9729-c563b1ade8d1, vol_name:cephfs) < ""
Oct 01 17:07:09 compute-0 podman[276258]: 2025-10-01 17:07:09.340517957 +0000 UTC m=+0.451312107 container create a1a2fde42af6a8bfe1b2220f349b9bee030454bb045f7cf7978b9879f631dfc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:07:09 compute-0 systemd[1]: Started libpod-conmon-a1a2fde42af6a8bfe1b2220f349b9bee030454bb045f7cf7978b9879f631dfc2.scope.
Oct 01 17:07:09 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:07:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccca9643b76c5a2788f13907832e0f82ace4e4c6feaa541568a0b6303cde4822/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:07:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccca9643b76c5a2788f13907832e0f82ace4e4c6feaa541568a0b6303cde4822/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:07:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccca9643b76c5a2788f13907832e0f82ace4e4c6feaa541568a0b6303cde4822/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:07:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccca9643b76c5a2788f13907832e0f82ace4e4c6feaa541568a0b6303cde4822/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:07:10 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "7eafcb7a-4d98-4306-a178-8f37dd36887c", "snap_name": "ef793e02-9980-4db2-8e9a-aaf0250b4696", "target_sub_name": "b3a00dd2-1018-47de-9729-c563b1ade8d1", "format": "json"}]: dispatch
Oct 01 17:07:10 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 65 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 51 KiB/s wr, 3 op/s
Oct 01 17:07:10 compute-0 podman[276258]: 2025-10-01 17:07:10.12838147 +0000 UTC m=+1.239175660 container init a1a2fde42af6a8bfe1b2220f349b9bee030454bb045f7cf7978b9879f631dfc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mccarthy, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 01 17:07:10 compute-0 podman[276258]: 2025-10-01 17:07:10.138887368 +0000 UTC m=+1.249681478 container start a1a2fde42af6a8bfe1b2220f349b9bee030454bb045f7cf7978b9879f631dfc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:07:10 compute-0 podman[276258]: 2025-10-01 17:07:10.318190819 +0000 UTC m=+1.428984949 container attach a1a2fde42af6a8bfe1b2220f349b9bee030454bb045f7cf7978b9879f631dfc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 17:07:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:07:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]: {
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:     "0": [
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:         {
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "devices": [
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "/dev/loop3"
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             ],
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "lv_name": "ceph_lv0",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "lv_size": "21470642176",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "name": "ceph_lv0",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "tags": {
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.cluster_name": "ceph",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.crush_device_class": "",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.encrypted": "0",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.osd_id": "0",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.type": "block",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.vdo": "0"
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             },
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "type": "block",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "vg_name": "ceph_vg0"
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:         }
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:     ],
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:     "1": [
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:         {
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "devices": [
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "/dev/loop4"
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             ],
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "lv_name": "ceph_lv1",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "lv_size": "21470642176",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "name": "ceph_lv1",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "tags": {
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.cluster_name": "ceph",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.crush_device_class": "",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.encrypted": "0",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.osd_id": "1",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.type": "block",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.vdo": "0"
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             },
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "type": "block",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "vg_name": "ceph_vg1"
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:         }
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:     ],
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:     "2": [
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:         {
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "devices": [
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "/dev/loop5"
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             ],
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "lv_name": "ceph_lv2",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "lv_size": "21470642176",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "name": "ceph_lv2",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "tags": {
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.cluster_name": "ceph",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.crush_device_class": "",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.encrypted": "0",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.osd_id": "2",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.type": "block",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:                 "ceph.vdo": "0"
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             },
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "type": "block",
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:             "vg_name": "ceph_vg2"
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:         }
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]:     ]
Oct 01 17:07:11 compute-0 optimistic_mccarthy[276274]: }
Oct 01 17:07:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Oct 01 17:07:11 compute-0 systemd[1]: libpod-a1a2fde42af6a8bfe1b2220f349b9bee030454bb045f7cf7978b9879f631dfc2.scope: Deactivated successfully.
Oct 01 17:07:11 compute-0 podman[276283]: 2025-10-01 17:07:11.180154493 +0000 UTC m=+0.025089458 container died a1a2fde42af6a8bfe1b2220f349b9bee030454bb045f7cf7978b9879f631dfc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 17:07:11 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_17:07:11
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'images', '.mgr', '.rgw.root', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'backups', 'vms']
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:07:11 compute-0 ceph-mon[74273]: pgmap v1138: 305 pgs: 305 active+clean; 65 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 51 KiB/s wr, 3 op/s
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/b3a00dd2-1018-47de-9729-c563b1ade8d1/.meta.tmp'
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b3a00dd2-1018-47de-9729-c563b1ade8d1/.meta.tmp' to config b'/volumes/_nogroup/b3a00dd2-1018-47de-9729-c563b1ade8d1/.meta'
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 23c1c86e-e007-42b3-b901-06589df83b15 for path b'/volumes/_nogroup/b3a00dd2-1018-47de-9729-c563b1ade8d1'
Oct 01 17:07:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccca9643b76c5a2788f13907832e0f82ace4e4c6feaa541568a0b6303cde4822-merged.mount: Deactivated successfully.
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:07:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 65 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 62 KiB/s wr, 4 op/s
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/7eafcb7a-4d98-4306-a178-8f37dd36887c/.meta.tmp'
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7eafcb7a-4d98-4306-a178-8f37dd36887c/.meta.tmp' to config b'/volumes/_nogroup/7eafcb7a-4d98-4306-a178-8f37dd36887c/.meta'
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:ef793e02-9980-4db2-8e9a-aaf0250b4696, sub_name:7eafcb7a-4d98-4306-a178-8f37dd36887c, target_sub_name:b3a00dd2-1018-47de-9729-c563b1ade8d1, vol_name:cephfs) < ""
Oct 01 17:07:12 compute-0 podman[276283]: 2025-10-01 17:07:12.249211218 +0000 UTC m=+1.094146183 container remove a1a2fde42af6a8bfe1b2220f349b9bee030454bb045f7cf7978b9879f631dfc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mccarthy, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:07:12 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:07:12.246+0000 7f813f03a640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:07:12 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:07:12.246+0000 7f813f03a640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:07:12 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:07:12.246+0000 7f813f03a640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:07:12 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:07:12.246+0000 7f813f03a640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:07:12 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:07:12.246+0000 7f813f03a640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:07:12 compute-0 systemd[1]: libpod-conmon-a1a2fde42af6a8bfe1b2220f349b9bee030454bb045f7cf7978b9879f631dfc2.scope: Deactivated successfully.
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b3a00dd2-1018-47de-9729-c563b1ade8d1", "format": "json"}]: dispatch
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b3a00dd2-1018-47de-9729-c563b1ade8d1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b3a00dd2-1018-47de-9729-c563b1ade8d1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:07:12 compute-0 sudo[276152]: pam_unix(sudo:session): session closed for user root
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "efdb0876-5085-415d-ba20-1a51461b6ac1", "format": "json"}]: dispatch
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:efdb0876-5085-415d-ba20-1a51461b6ac1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:efdb0876-5085-415d-ba20-1a51461b6ac1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:07:12 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:07:12.365+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'efdb0876-5085-415d-ba20-1a51461b6ac1' of type subvolume
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'efdb0876-5085-415d-ba20-1a51461b6ac1' of type subvolume
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "efdb0876-5085-415d-ba20-1a51461b6ac1", "force": true, "format": "json"}]: dispatch
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:efdb0876-5085-415d-ba20-1a51461b6ac1, vol_name:cephfs) < ""
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/efdb0876-5085-415d-ba20-1a51461b6ac1'' moved to trashcan
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:efdb0876-5085-415d-ba20-1a51461b6ac1, vol_name:cephfs) < ""
Oct 01 17:07:12 compute-0 sudo[276312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:07:12 compute-0 sudo[276312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:07:12 compute-0 sudo[276312]: pam_unix(sudo:session): session closed for user root
Oct 01 17:07:12 compute-0 sudo[276337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:07:12 compute-0 sudo[276337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:07:12 compute-0 sudo[276337]: pam_unix(sudo:session): session closed for user root
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/b3a00dd2-1018-47de-9729-c563b1ade8d1
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, b3a00dd2-1018-47de-9729-c563b1ade8d1)
Oct 01 17:07:12 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:07:12.568+0000 7f814003c640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:07:12 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:07:12.568+0000 7f814003c640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:07:12 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:07:12.568+0000 7f814003c640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:07:12 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:07:12.568+0000 7f814003c640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:07:12 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:07:12.568+0000 7f814003c640 -1 client.0 error registering admin socket command: (17) File exists
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: client.0 error registering admin socket command: (17) File exists
Oct 01 17:07:12 compute-0 sudo[276362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:07:12 compute-0 sudo[276362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:07:12 compute-0 sudo[276362]: pam_unix(sudo:session): session closed for user root
Oct 01 17:07:12 compute-0 sudo[276399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 17:07:12 compute-0 sudo[276399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:07:12 compute-0 ceph-mon[74273]: osdmap e159: 3 total, 3 up, 3 in
Oct 01 17:07:12 compute-0 ceph-mon[74273]: pgmap v1140: 305 pgs: 305 active+clean; 65 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 62 KiB/s wr, 4 op/s
Oct 01 17:07:12 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b3a00dd2-1018-47de-9729-c563b1ade8d1", "format": "json"}]: dispatch
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, b3a00dd2-1018-47de-9729-c563b1ade8d1) -- by 0 seconds
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/b3a00dd2-1018-47de-9729-c563b1ade8d1/.meta.tmp'
Oct 01 17:07:12 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b3a00dd2-1018-47de-9729-c563b1ade8d1/.meta.tmp' to config b'/volumes/_nogroup/b3a00dd2-1018-47de-9729-c563b1ade8d1/.meta'
Oct 01 17:07:13 compute-0 podman[276465]: 2025-10-01 17:07:13.018246925 +0000 UTC m=+0.049225709 container create 94f6af67d7867be2c2ead93b2112c56d78f9a0f5a5f431e0ddda59366056b210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Oct 01 17:07:13 compute-0 systemd[1]: Started libpod-conmon-94f6af67d7867be2c2ead93b2112c56d78f9a0f5a5f431e0ddda59366056b210.scope.
Oct 01 17:07:13 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:07:13 compute-0 podman[276465]: 2025-10-01 17:07:12.993537393 +0000 UTC m=+0.024516207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:07:13 compute-0 podman[276465]: 2025-10-01 17:07:13.106181733 +0000 UTC m=+0.137160557 container init 94f6af67d7867be2c2ead93b2112c56d78f9a0f5a5f431e0ddda59366056b210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 17:07:13 compute-0 podman[276465]: 2025-10-01 17:07:13.116270036 +0000 UTC m=+0.147248820 container start 94f6af67d7867be2c2ead93b2112c56d78f9a0f5a5f431e0ddda59366056b210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_keldysh, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 17:07:13 compute-0 musing_keldysh[276483]: 167 167
Oct 01 17:07:13 compute-0 podman[276465]: 2025-10-01 17:07:13.121717752 +0000 UTC m=+0.152696556 container attach 94f6af67d7867be2c2ead93b2112c56d78f9a0f5a5f431e0ddda59366056b210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 17:07:13 compute-0 systemd[1]: libpod-94f6af67d7867be2c2ead93b2112c56d78f9a0f5a5f431e0ddda59366056b210.scope: Deactivated successfully.
Oct 01 17:07:13 compute-0 podman[276465]: 2025-10-01 17:07:13.123928701 +0000 UTC m=+0.154907485 container died 94f6af67d7867be2c2ead93b2112c56d78f9a0f5a5f431e0ddda59366056b210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_keldysh, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 17:07:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-103f89b63ed0c141ce0237ca22af9a1d59d346866d52025e1e3ffe610d255bba-merged.mount: Deactivated successfully.
Oct 01 17:07:13 compute-0 podman[276479]: 2025-10-01 17:07:13.209460663 +0000 UTC m=+0.144673750 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 01 17:07:13 compute-0 podman[276465]: 2025-10-01 17:07:13.217634132 +0000 UTC m=+0.248612916 container remove 94f6af67d7867be2c2ead93b2112c56d78f9a0f5a5f431e0ddda59366056b210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:07:13 compute-0 systemd[1]: libpod-conmon-94f6af67d7867be2c2ead93b2112c56d78f9a0f5a5f431e0ddda59366056b210.scope: Deactivated successfully.
Oct 01 17:07:13 compute-0 podman[276530]: 2025-10-01 17:07:13.382104938 +0000 UTC m=+0.052804174 container create ae2387e4ddae1ae96cfc4139d1acfd4903c0d423dd1e24f82f8d3f981ae361bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 17:07:13 compute-0 systemd[1]: Started libpod-conmon-ae2387e4ddae1ae96cfc4139d1acfd4903c0d423dd1e24f82f8d3f981ae361bc.scope.
Oct 01 17:07:13 compute-0 podman[276530]: 2025-10-01 17:07:13.357134776 +0000 UTC m=+0.027834092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:07:13 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/474b2373a0ed8b50b315593ade4da0deb9a5f683837634eb21dd24731f63aca2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/474b2373a0ed8b50b315593ade4da0deb9a5f683837634eb21dd24731f63aca2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/474b2373a0ed8b50b315593ade4da0deb9a5f683837634eb21dd24731f63aca2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/474b2373a0ed8b50b315593ade4da0deb9a5f683837634eb21dd24731f63aca2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:07:13 compute-0 podman[276530]: 2025-10-01 17:07:13.4971634 +0000 UTC m=+0.167862716 container init ae2387e4ddae1ae96cfc4139d1acfd4903c0d423dd1e24f82f8d3f981ae361bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 17:07:13 compute-0 podman[276530]: 2025-10-01 17:07:13.503664771 +0000 UTC m=+0.174364027 container start ae2387e4ddae1ae96cfc4139d1acfd4903c0d423dd1e24f82f8d3f981ae361bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:07:13 compute-0 podman[276530]: 2025-10-01 17:07:13.51219378 +0000 UTC m=+0.182893036 container attach ae2387e4ddae1ae96cfc4139d1acfd4903c0d423dd1e24f82f8d3f981ae361bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 17:07:13 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:07:13.808 162304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:71:db', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:60:3f:78:bd:29'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 17:07:13 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:07:13.812 162304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 17:07:13 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "efdb0876-5085-415d-ba20-1a51461b6ac1", "format": "json"}]: dispatch
Oct 01 17:07:13 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "efdb0876-5085-415d-ba20-1a51461b6ac1", "force": true, "format": "json"}]: dispatch
Oct 01 17:07:14 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 65 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 77 KiB/s wr, 4 op/s
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]: {
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:         "osd_id": 2,
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:         "type": "bluestore"
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:     },
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:         "osd_id": 0,
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:         "type": "bluestore"
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:     },
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:         "osd_id": 1,
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:         "type": "bluestore"
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]:     }
Oct 01 17:07:14 compute-0 sleepy_lewin[276547]: }
Oct 01 17:07:14 compute-0 systemd[1]: libpod-ae2387e4ddae1ae96cfc4139d1acfd4903c0d423dd1e24f82f8d3f981ae361bc.scope: Deactivated successfully.
Oct 01 17:07:14 compute-0 podman[276530]: 2025-10-01 17:07:14.456496458 +0000 UTC m=+1.127195734 container died ae2387e4ddae1ae96cfc4139d1acfd4903c0d423dd1e24f82f8d3f981ae361bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 17:07:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-474b2373a0ed8b50b315593ade4da0deb9a5f683837634eb21dd24731f63aca2-merged.mount: Deactivated successfully.
Oct 01 17:07:14 compute-0 podman[276530]: 2025-10-01 17:07:14.791764042 +0000 UTC m=+1.462463278 container remove ae2387e4ddae1ae96cfc4139d1acfd4903c0d423dd1e24f82f8d3f981ae361bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:07:14 compute-0 systemd[1]: libpod-conmon-ae2387e4ddae1ae96cfc4139d1acfd4903c0d423dd1e24f82f8d3f981ae361bc.scope: Deactivated successfully.
Oct 01 17:07:14 compute-0 sudo[276399]: pam_unix(sudo:session): session closed for user root
Oct 01 17:07:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:07:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:07:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:07:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:07:14 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 01b60810-a1cc-4c4c-a45d-db6c9c4f2b22 does not exist
Oct 01 17:07:14 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 1cf4335b-9329-4011-88cb-8c56c1bdf261 does not exist
Oct 01 17:07:14 compute-0 ceph-mon[74273]: pgmap v1141: 305 pgs: 305 active+clean; 65 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 77 KiB/s wr, 4 op/s
Oct 01 17:07:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:07:14 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:07:14 compute-0 sudo[276592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:07:14 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.pmbdpj(active, since 33m)
Oct 01 17:07:14 compute-0 sudo[276592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:07:14 compute-0 sudo[276592]: pam_unix(sudo:session): session closed for user root
Oct 01 17:07:15 compute-0 sudo[276617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 17:07:15 compute-0 sudo[276617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:07:15 compute-0 sudo[276617]: pam_unix(sudo:session): session closed for user root
Oct 01 17:07:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/7eafcb7a-4d98-4306-a178-8f37dd36887c/.snap/ef793e02-9980-4db2-8e9a-aaf0250b4696/54f21604-7216-436f-9083-3ff149648db9' to b'/volumes/_nogroup/b3a00dd2-1018-47de-9729-c563b1ade8d1/006cbd5a-fd22-4b11-967f-3c1a5049e2ed'
Oct 01 17:07:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/b3a00dd2-1018-47de-9729-c563b1ade8d1/.meta.tmp'
Oct 01 17:07:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b3a00dd2-1018-47de-9729-c563b1ade8d1/.meta.tmp' to config b'/volumes/_nogroup/b3a00dd2-1018-47de-9729-c563b1ade8d1/.meta'
Oct 01 17:07:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.clone_index] untracking 23c1c86e-e007-42b3-b901-06589df83b15
Oct 01 17:07:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7eafcb7a-4d98-4306-a178-8f37dd36887c/.meta.tmp'
Oct 01 17:07:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7eafcb7a-4d98-4306-a178-8f37dd36887c/.meta.tmp' to config b'/volumes/_nogroup/7eafcb7a-4d98-4306-a178-8f37dd36887c/.meta'
Oct 01 17:07:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/b3a00dd2-1018-47de-9729-c563b1ade8d1/.meta.tmp'
Oct 01 17:07:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b3a00dd2-1018-47de-9729-c563b1ade8d1/.meta.tmp' to config b'/volumes/_nogroup/b3a00dd2-1018-47de-9729-c563b1ade8d1/.meta'
Oct 01 17:07:15 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, b3a00dd2-1018-47de-9729-c563b1ade8d1)
Oct 01 17:07:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:07:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Oct 01 17:07:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Oct 01 17:07:15 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Oct 01 17:07:15 compute-0 ceph-mon[74273]: mgrmap e15: compute-0.pmbdpj(active, since 33m)
Oct 01 17:07:15 compute-0 ceph-mon[74273]: osdmap e160: 3 total, 3 up, 3 in
Oct 01 17:07:16 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 65 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 58 KiB/s wr, 4 op/s
Oct 01 17:07:16 compute-0 nova_compute[259504]: 2025-10-01 17:07:16.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:07:17 compute-0 ceph-mon[74273]: pgmap v1143: 305 pgs: 305 active+clean; 65 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 58 KiB/s wr, 4 op/s
Oct 01 17:07:17 compute-0 podman[276642]: 2025-10-01 17:07:17.777978695 +0000 UTC m=+0.085663885 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 01 17:07:18 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 66 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 87 KiB/s wr, 7 op/s
Oct 01 17:07:19 compute-0 ceph-mon[74273]: pgmap v1144: 305 pgs: 305 active+clean; 66 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 87 KiB/s wr, 7 op/s
Oct 01 17:07:19 compute-0 nova_compute[259504]: 2025-10-01 17:07:19.749 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:07:19 compute-0 nova_compute[259504]: 2025-10-01 17:07:19.803 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:07:19 compute-0 nova_compute[259504]: 2025-10-01 17:07:19.804 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:07:19 compute-0 nova_compute[259504]: 2025-10-01 17:07:19.804 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:07:19 compute-0 nova_compute[259504]: 2025-10-01 17:07:19.804 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 17:07:19 compute-0 nova_compute[259504]: 2025-10-01 17:07:19.804 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:07:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:07:19.977 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:07:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:07:19.977 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:07:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:07:19.977 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:07:20 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 66 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 924 B/s rd, 79 KiB/s wr, 7 op/s
Oct 01 17:07:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:07:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2056226482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:07:20 compute-0 nova_compute[259504]: 2025-10-01 17:07:20.240 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:07:20 compute-0 nova_compute[259504]: 2025-10-01 17:07:20.407 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 17:07:20 compute-0 nova_compute[259504]: 2025-10-01 17:07:20.408 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5058MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 17:07:20 compute-0 nova_compute[259504]: 2025-10-01 17:07:20.409 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:07:20 compute-0 nova_compute[259504]: 2025-10-01 17:07:20.409 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:07:20 compute-0 nova_compute[259504]: 2025-10-01 17:07:20.465 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 17:07:20 compute-0 nova_compute[259504]: 2025-10-01 17:07:20.465 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 17:07:20 compute-0 nova_compute[259504]: 2025-10-01 17:07:20.480 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:07:20 compute-0 ceph-mon[74273]: pgmap v1145: 305 pgs: 305 active+clean; 66 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 924 B/s rd, 79 KiB/s wr, 7 op/s
Oct 01 17:07:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2056226482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:07:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:07:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:07:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2640209281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:07:20 compute-0 nova_compute[259504]: 2025-10-01 17:07:20.888 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:07:20 compute-0 nova_compute[259504]: 2025-10-01 17:07:20.892 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 17:07:20 compute-0 nova_compute[259504]: 2025-10-01 17:07:20.957 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 17:07:20 compute-0 nova_compute[259504]: 2025-10-01 17:07:20.959 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 17:07:20 compute-0 nova_compute[259504]: 2025-10-01 17:07:20.959 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00039725115175490004 of space, bias 4.0, pg target 0.47670138210588003 quantized to 16 (current 16)
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:07:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:07:21 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2640209281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:07:22 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 66 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 70 KiB/s wr, 6 op/s
Oct 01 17:07:22 compute-0 ceph-mon[74273]: pgmap v1146: 305 pgs: 305 active+clean; 66 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 70 KiB/s wr, 6 op/s
Oct 01 17:07:22 compute-0 nova_compute[259504]: 2025-10-01 17:07:22.960 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:07:22 compute-0 nova_compute[259504]: 2025-10-01 17:07:22.961 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 17:07:22 compute-0 nova_compute[259504]: 2025-10-01 17:07:22.961 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 17:07:22 compute-0 nova_compute[259504]: 2025-10-01 17:07:22.981 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 17:07:22 compute-0 nova_compute[259504]: 2025-10-01 17:07:22.982 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:07:23 compute-0 nova_compute[259504]: 2025-10-01 17:07:23.749 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:07:23 compute-0 nova_compute[259504]: 2025-10-01 17:07:23.749 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:07:23 compute-0 nova_compute[259504]: 2025-10-01 17:07:23.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:07:23 compute-0 nova_compute[259504]: 2025-10-01 17:07:23.750 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 17:07:23 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:07:23.815 162304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2971fc2-5b75-459a-98a0-6e626d0d4d99, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 17:07:24 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 36 KiB/s wr, 4 op/s
Oct 01 17:07:24 compute-0 nova_compute[259504]: 2025-10-01 17:07:24.745 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:07:25 compute-0 ceph-mon[74273]: pgmap v1147: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 36 KiB/s wr, 4 op/s
Oct 01 17:07:25 compute-0 nova_compute[259504]: 2025-10-01 17:07:25.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:07:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:07:26 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 595 B/s rd, 35 KiB/s wr, 4 op/s
Oct 01 17:07:27 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:07:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8, vol_name:cephfs) < ""
Oct 01 17:07:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8/.meta.tmp'
Oct 01 17:07:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8/.meta.tmp' to config b'/volumes/_nogroup/a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8/.meta'
Oct 01 17:07:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8, vol_name:cephfs) < ""
Oct 01 17:07:27 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8", "format": "json"}]: dispatch
Oct 01 17:07:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8, vol_name:cephfs) < ""
Oct 01 17:07:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8, vol_name:cephfs) < ""
Oct 01 17:07:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:07:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:07:27 compute-0 ceph-mon[74273]: pgmap v1148: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 595 B/s rd, 35 KiB/s wr, 4 op/s
Oct 01 17:07:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:07:28 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 44 KiB/s wr, 4 op/s
Oct 01 17:07:28 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:07:28 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8", "format": "json"}]: dispatch
Oct 01 17:07:29 compute-0 ceph-mon[74273]: pgmap v1149: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 44 KiB/s wr, 4 op/s
Oct 01 17:07:30 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 25 KiB/s wr, 2 op/s
Oct 01 17:07:30 compute-0 ceph-mon[74273]: pgmap v1150: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 25 KiB/s wr, 2 op/s
Oct 01 17:07:30 compute-0 podman[276707]: 2025-10-01 17:07:30.790651704 +0000 UTC m=+0.089975748 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3)
Oct 01 17:07:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:07:30 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a4bff048-9ed8-4cdd-9086-a185cea04d3c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:07:30 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a4bff048-9ed8-4cdd-9086-a185cea04d3c, vol_name:cephfs) < ""
Oct 01 17:07:31 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a4bff048-9ed8-4cdd-9086-a185cea04d3c/.meta.tmp'
Oct 01 17:07:31 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a4bff048-9ed8-4cdd-9086-a185cea04d3c/.meta.tmp' to config b'/volumes/_nogroup/a4bff048-9ed8-4cdd-9086-a185cea04d3c/.meta'
Oct 01 17:07:31 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a4bff048-9ed8-4cdd-9086-a185cea04d3c, vol_name:cephfs) < ""
Oct 01 17:07:31 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a4bff048-9ed8-4cdd-9086-a185cea04d3c", "format": "json"}]: dispatch
Oct 01 17:07:31 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a4bff048-9ed8-4cdd-9086-a185cea04d3c, vol_name:cephfs) < ""
Oct 01 17:07:31 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a4bff048-9ed8-4cdd-9086-a185cea04d3c, vol_name:cephfs) < ""
Oct 01 17:07:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:07:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:07:31 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a4bff048-9ed8-4cdd-9086-a185cea04d3c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:07:31 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a4bff048-9ed8-4cdd-9086-a185cea04d3c", "format": "json"}]: dispatch
Oct 01 17:07:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:07:32 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 25 KiB/s wr, 1 op/s
Oct 01 17:07:32 compute-0 ceph-mon[74273]: pgmap v1151: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 25 KiB/s wr, 1 op/s
Oct 01 17:07:32 compute-0 podman[276727]: 2025-10-01 17:07:32.786062729 +0000 UTC m=+0.089548518 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:07:33 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "afbc5b28-4a3d-4f5a-9775-80288db0083b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:07:33 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:afbc5b28-4a3d-4f5a-9775-80288db0083b, vol_name:cephfs) < ""
Oct 01 17:07:33 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afbc5b28-4a3d-4f5a-9775-80288db0083b/.meta.tmp'
Oct 01 17:07:33 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afbc5b28-4a3d-4f5a-9775-80288db0083b/.meta.tmp' to config b'/volumes/_nogroup/afbc5b28-4a3d-4f5a-9775-80288db0083b/.meta'
Oct 01 17:07:33 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:afbc5b28-4a3d-4f5a-9775-80288db0083b, vol_name:cephfs) < ""
Oct 01 17:07:33 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "afbc5b28-4a3d-4f5a-9775-80288db0083b", "format": "json"}]: dispatch
Oct 01 17:07:33 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:afbc5b28-4a3d-4f5a-9775-80288db0083b, vol_name:cephfs) < ""
Oct 01 17:07:33 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:afbc5b28-4a3d-4f5a-9775-80288db0083b, vol_name:cephfs) < ""
Oct 01 17:07:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:07:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:07:33 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "afbc5b28-4a3d-4f5a-9775-80288db0083b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:07:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:07:34 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 37 KiB/s wr, 2 op/s
Oct 01 17:07:34 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a4bff048-9ed8-4cdd-9086-a185cea04d3c", "format": "json"}]: dispatch
Oct 01 17:07:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a4bff048-9ed8-4cdd-9086-a185cea04d3c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:07:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a4bff048-9ed8-4cdd-9086-a185cea04d3c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:07:34 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:07:34.538+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a4bff048-9ed8-4cdd-9086-a185cea04d3c' of type subvolume
Oct 01 17:07:34 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a4bff048-9ed8-4cdd-9086-a185cea04d3c' of type subvolume
Oct 01 17:07:34 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a4bff048-9ed8-4cdd-9086-a185cea04d3c", "force": true, "format": "json"}]: dispatch
Oct 01 17:07:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a4bff048-9ed8-4cdd-9086-a185cea04d3c, vol_name:cephfs) < ""
Oct 01 17:07:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a4bff048-9ed8-4cdd-9086-a185cea04d3c'' moved to trashcan
Oct 01 17:07:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:07:34 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a4bff048-9ed8-4cdd-9086-a185cea04d3c, vol_name:cephfs) < ""
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:07:34.667643) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338454667697, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1782, "num_deletes": 258, "total_data_size": 2405009, "memory_usage": 2483440, "flush_reason": "Manual Compaction"}
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct 01 17:07:34 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "afbc5b28-4a3d-4f5a-9775-80288db0083b", "format": "json"}]: dispatch
Oct 01 17:07:34 compute-0 ceph-mon[74273]: pgmap v1152: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 37 KiB/s wr, 2 op/s
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338454690316, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2333183, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24440, "largest_seqno": 26221, "table_properties": {"data_size": 2324839, "index_size": 4902, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19864, "raw_average_key_size": 21, "raw_value_size": 2307131, "raw_average_value_size": 2478, "num_data_blocks": 219, "num_entries": 931, "num_filter_entries": 931, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759338345, "oldest_key_time": 1759338345, "file_creation_time": 1759338454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 22738 microseconds, and 9681 cpu microseconds.
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:07:34.690382) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2333183 bytes OK
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:07:34.690409) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:07:34.692143) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:07:34.692167) EVENT_LOG_v1 {"time_micros": 1759338454692159, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:07:34.692194) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2396741, prev total WAL file size 2396741, number of live WAL files 2.
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:07:34.693550) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2278KB)], [56(9342KB)]
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338454693628, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 11899681, "oldest_snapshot_seqno": -1}
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5583 keys, 10064106 bytes, temperature: kUnknown
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338454773863, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 10064106, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10023013, "index_size": 26007, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14021, "raw_key_size": 138917, "raw_average_key_size": 24, "raw_value_size": 9919093, "raw_average_value_size": 1776, "num_data_blocks": 1081, "num_entries": 5583, "num_filter_entries": 5583, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336399, "oldest_key_time": 0, "file_creation_time": 1759338454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:07:34.774495) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 10064106 bytes
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:07:34.776127) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.8 rd, 125.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 9.1 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(9.4) write-amplify(4.3) OK, records in: 6116, records dropped: 533 output_compression: NoCompression
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:07:34.776171) EVENT_LOG_v1 {"time_micros": 1759338454776147, "job": 30, "event": "compaction_finished", "compaction_time_micros": 80525, "compaction_time_cpu_micros": 41993, "output_level": 6, "num_output_files": 1, "total_output_size": 10064106, "num_input_records": 6116, "num_output_records": 5583, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338454777192, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338454780987, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:07:34.693381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:07:34.781053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:07:34.781061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:07:34.781064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:07:34.781384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:07:34 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:07:34.781388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:07:35 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a4bff048-9ed8-4cdd-9086-a185cea04d3c", "format": "json"}]: dispatch
Oct 01 17:07:35 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a4bff048-9ed8-4cdd-9086-a185cea04d3c", "force": true, "format": "json"}]: dispatch
Oct 01 17:07:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:07:36 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s wr, 1 op/s
Oct 01 17:07:36 compute-0 ceph-mon[74273]: pgmap v1153: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s wr, 1 op/s
Oct 01 17:07:36 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afbc5b28-4a3d-4f5a-9775-80288db0083b", "snap_name": "b235c18d-212f-4b02-bc6d-e45b017ebaf4", "format": "json"}]: dispatch
Oct 01 17:07:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b235c18d-212f-4b02-bc6d-e45b017ebaf4, sub_name:afbc5b28-4a3d-4f5a-9775-80288db0083b, vol_name:cephfs) < ""
Oct 01 17:07:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b235c18d-212f-4b02-bc6d-e45b017ebaf4, sub_name:afbc5b28-4a3d-4f5a-9775-80288db0083b, vol_name:cephfs) < ""
Oct 01 17:07:37 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afbc5b28-4a3d-4f5a-9775-80288db0083b", "snap_name": "b235c18d-212f-4b02-bc6d-e45b017ebaf4", "format": "json"}]: dispatch
Oct 01 17:07:37 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8", "format": "json"}]: dispatch
Oct 01 17:07:37 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:07:37 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:07:37 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:07:37.992+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8' of type subvolume
Oct 01 17:07:37 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8' of type subvolume
Oct 01 17:07:37 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8", "force": true, "format": "json"}]: dispatch
Oct 01 17:07:37 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8, vol_name:cephfs) < ""
Oct 01 17:07:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8'' moved to trashcan
Oct 01 17:07:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:07:38 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8, vol_name:cephfs) < ""
Oct 01 17:07:38 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 67 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 65 KiB/s wr, 3 op/s
Oct 01 17:07:38 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8", "format": "json"}]: dispatch
Oct 01 17:07:38 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a69f0a4d-9f3f-4666-81e2-d1b6803ae7a8", "force": true, "format": "json"}]: dispatch
Oct 01 17:07:38 compute-0 ceph-mon[74273]: pgmap v1154: 305 pgs: 305 active+clean; 67 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 65 KiB/s wr, 3 op/s
Oct 01 17:07:40 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 67 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 50 KiB/s wr, 2 op/s
Oct 01 17:07:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:07:41 compute-0 ceph-mon[74273]: pgmap v1155: 305 pgs: 305 active+clean; 67 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 50 KiB/s wr, 2 op/s
Oct 01 17:07:41 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "1a287eed-b2b2-4a6a-a484-5e9b49075f78", "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:07:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:1a287eed-b2b2-4a6a-a484-5e9b49075f78, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 01 17:07:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:1a287eed-b2b2-4a6a-a484-5e9b49075f78, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 01 17:07:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:07:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:07:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:07:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:07:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:07:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:07:41 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:07:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09, vol_name:cephfs) < ""
Oct 01 17:07:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09/.meta.tmp'
Oct 01 17:07:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09/.meta.tmp' to config b'/volumes/_nogroup/b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09/.meta'
Oct 01 17:07:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09, vol_name:cephfs) < ""
Oct 01 17:07:41 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09", "format": "json"}]: dispatch
Oct 01 17:07:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09, vol_name:cephfs) < ""
Oct 01 17:07:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09, vol_name:cephfs) < ""
Oct 01 17:07:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:07:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:07:42 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 67 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 50 KiB/s wr, 2 op/s
Oct 01 17:07:42 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "1a287eed-b2b2-4a6a-a484-5e9b49075f78", "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:07:42 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:07:43 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:07:43 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09", "format": "json"}]: dispatch
Oct 01 17:07:43 compute-0 ceph-mon[74273]: pgmap v1156: 305 pgs: 305 active+clean; 67 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 50 KiB/s wr, 2 op/s
Oct 01 17:07:43 compute-0 podman[276747]: 2025-10-01 17:07:43.808396521 +0000 UTC m=+0.120511064 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 17:07:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 17:07:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3480104580' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:07:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 17:07:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3480104580' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:07:44 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 82 KiB/s wr, 5 op/s
Oct 01 17:07:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3480104580' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:07:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3480104580' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:07:44 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "1a287eed-b2b2-4a6a-a484-5e9b49075f78", "force": true, "format": "json"}]: dispatch
Oct 01 17:07:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:1a287eed-b2b2-4a6a-a484-5e9b49075f78, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Oct 01 17:07:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:1a287eed-b2b2-4a6a-a484-5e9b49075f78, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Oct 01 17:07:44 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "897be154-b47e-498f-a0a1-97bcf60e50f1", "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:07:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:897be154-b47e-498f-a0a1-97bcf60e50f1, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 01 17:07:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:897be154-b47e-498f-a0a1-97bcf60e50f1, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 01 17:07:45 compute-0 ceph-mon[74273]: pgmap v1157: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 82 KiB/s wr, 5 op/s
Oct 01 17:07:45 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09", "format": "json"}]: dispatch
Oct 01 17:07:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:07:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:07:45 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:07:45.366+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09' of type subvolume
Oct 01 17:07:45 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09' of type subvolume
Oct 01 17:07:45 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09", "force": true, "format": "json"}]: dispatch
Oct 01 17:07:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09, vol_name:cephfs) < ""
Oct 01 17:07:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09'' moved to trashcan
Oct 01 17:07:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:07:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09, vol_name:cephfs) < ""
Oct 01 17:07:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:07:46 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 70 KiB/s wr, 4 op/s
Oct 01 17:07:46 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "1a287eed-b2b2-4a6a-a484-5e9b49075f78", "force": true, "format": "json"}]: dispatch
Oct 01 17:07:46 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "897be154-b47e-498f-a0a1-97bcf60e50f1", "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:07:46 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09", "format": "json"}]: dispatch
Oct 01 17:07:47 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b8cb71a4-9f4b-4317-bdd2-cb9e0a520e09", "force": true, "format": "json"}]: dispatch
Oct 01 17:07:47 compute-0 ceph-mon[74273]: pgmap v1158: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 70 KiB/s wr, 4 op/s
Oct 01 17:07:48 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 97 KiB/s wr, 5 op/s
Oct 01 17:07:48 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "897be154-b47e-498f-a0a1-97bcf60e50f1", "force": true, "format": "json"}]: dispatch
Oct 01 17:07:48 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:897be154-b47e-498f-a0a1-97bcf60e50f1, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Oct 01 17:07:48 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:897be154-b47e-498f-a0a1-97bcf60e50f1, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Oct 01 17:07:48 compute-0 podman[276775]: 2025-10-01 17:07:48.790104014 +0000 UTC m=+0.091005317 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 01 17:07:48 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ebb81bb7-5ad0-430e-bce8-857817637159", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:07:48 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ebb81bb7-5ad0-430e-bce8-857817637159, vol_name:cephfs) < ""
Oct 01 17:07:48 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ebb81bb7-5ad0-430e-bce8-857817637159/.meta.tmp'
Oct 01 17:07:48 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ebb81bb7-5ad0-430e-bce8-857817637159/.meta.tmp' to config b'/volumes/_nogroup/ebb81bb7-5ad0-430e-bce8-857817637159/.meta'
Oct 01 17:07:48 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ebb81bb7-5ad0-430e-bce8-857817637159, vol_name:cephfs) < ""
Oct 01 17:07:48 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ebb81bb7-5ad0-430e-bce8-857817637159", "format": "json"}]: dispatch
Oct 01 17:07:48 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ebb81bb7-5ad0-430e-bce8-857817637159, vol_name:cephfs) < ""
Oct 01 17:07:48 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ebb81bb7-5ad0-430e-bce8-857817637159, vol_name:cephfs) < ""
Oct 01 17:07:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:07:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:07:49 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dda50ff1-4d8c-4915-a4cd-5da1c827bac3", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:07:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:dda50ff1-4d8c-4915-a4cd-5da1c827bac3, vol_name:cephfs) < ""
Oct 01 17:07:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dda50ff1-4d8c-4915-a4cd-5da1c827bac3/.meta.tmp'
Oct 01 17:07:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dda50ff1-4d8c-4915-a4cd-5da1c827bac3/.meta.tmp' to config b'/volumes/_nogroup/dda50ff1-4d8c-4915-a4cd-5da1c827bac3/.meta'
Oct 01 17:07:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:dda50ff1-4d8c-4915-a4cd-5da1c827bac3, vol_name:cephfs) < ""
Oct 01 17:07:49 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dda50ff1-4d8c-4915-a4cd-5da1c827bac3", "format": "json"}]: dispatch
Oct 01 17:07:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dda50ff1-4d8c-4915-a4cd-5da1c827bac3, vol_name:cephfs) < ""
Oct 01 17:07:49 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dda50ff1-4d8c-4915-a4cd-5da1c827bac3, vol_name:cephfs) < ""
Oct 01 17:07:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:07:49 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:07:49 compute-0 ceph-mon[74273]: pgmap v1159: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 97 KiB/s wr, 5 op/s
Oct 01 17:07:49 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "897be154-b47e-498f-a0a1-97bcf60e50f1", "force": true, "format": "json"}]: dispatch
Oct 01 17:07:49 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:07:49 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:07:50 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 59 KiB/s wr, 3 op/s
Oct 01 17:07:50 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ebb81bb7-5ad0-430e-bce8-857817637159", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:07:50 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ebb81bb7-5ad0-430e-bce8-857817637159", "format": "json"}]: dispatch
Oct 01 17:07:50 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dda50ff1-4d8c-4915-a4cd-5da1c827bac3", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:07:50 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dda50ff1-4d8c-4915-a4cd-5da1c827bac3", "format": "json"}]: dispatch
Oct 01 17:07:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:07:51 compute-0 ceph-mon[74273]: pgmap v1160: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 59 KiB/s wr, 3 op/s
Oct 01 17:07:52 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 59 KiB/s wr, 3 op/s
Oct 01 17:07:52 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ebb81bb7-5ad0-430e-bce8-857817637159", "format": "json"}]: dispatch
Oct 01 17:07:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ebb81bb7-5ad0-430e-bce8-857817637159, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:07:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ebb81bb7-5ad0-430e-bce8-857817637159, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:07:52 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:07:52.509+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ebb81bb7-5ad0-430e-bce8-857817637159' of type subvolume
Oct 01 17:07:52 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ebb81bb7-5ad0-430e-bce8-857817637159' of type subvolume
Oct 01 17:07:52 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ebb81bb7-5ad0-430e-bce8-857817637159", "force": true, "format": "json"}]: dispatch
Oct 01 17:07:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ebb81bb7-5ad0-430e-bce8-857817637159, vol_name:cephfs) < ""
Oct 01 17:07:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ebb81bb7-5ad0-430e-bce8-857817637159'' moved to trashcan
Oct 01 17:07:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:07:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ebb81bb7-5ad0-430e-bce8-857817637159, vol_name:cephfs) < ""
Oct 01 17:07:52 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dda50ff1-4d8c-4915-a4cd-5da1c827bac3", "format": "json"}]: dispatch
Oct 01 17:07:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dda50ff1-4d8c-4915-a4cd-5da1c827bac3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:07:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dda50ff1-4d8c-4915-a4cd-5da1c827bac3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:07:52 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:07:52.946+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dda50ff1-4d8c-4915-a4cd-5da1c827bac3' of type subvolume
Oct 01 17:07:52 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dda50ff1-4d8c-4915-a4cd-5da1c827bac3' of type subvolume
Oct 01 17:07:52 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dda50ff1-4d8c-4915-a4cd-5da1c827bac3", "force": true, "format": "json"}]: dispatch
Oct 01 17:07:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dda50ff1-4d8c-4915-a4cd-5da1c827bac3, vol_name:cephfs) < ""
Oct 01 17:07:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dda50ff1-4d8c-4915-a4cd-5da1c827bac3'' moved to trashcan
Oct 01 17:07:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:07:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dda50ff1-4d8c-4915-a4cd-5da1c827bac3, vol_name:cephfs) < ""
Oct 01 17:07:53 compute-0 ceph-mon[74273]: pgmap v1161: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 59 KiB/s wr, 3 op/s
Oct 01 17:07:54 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 68 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 97 KiB/s wr, 5 op/s
Oct 01 17:07:54 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ebb81bb7-5ad0-430e-bce8-857817637159", "format": "json"}]: dispatch
Oct 01 17:07:54 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ebb81bb7-5ad0-430e-bce8-857817637159", "force": true, "format": "json"}]: dispatch
Oct 01 17:07:54 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dda50ff1-4d8c-4915-a4cd-5da1c827bac3", "format": "json"}]: dispatch
Oct 01 17:07:54 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dda50ff1-4d8c-4915-a4cd-5da1c827bac3", "force": true, "format": "json"}]: dispatch
Oct 01 17:07:55 compute-0 ceph-mon[74273]: pgmap v1162: 305 pgs: 305 active+clean; 68 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 97 KiB/s wr, 5 op/s
Oct 01 17:07:55 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b3a00dd2-1018-47de-9729-c563b1ade8d1", "format": "json"}]: dispatch
Oct 01 17:07:55 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b3a00dd2-1018-47de-9729-c563b1ade8d1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:07:55 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:07:56 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 68 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 65 KiB/s wr, 3 op/s
Oct 01 17:07:56 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b3a00dd2-1018-47de-9729-c563b1ade8d1", "format": "json"}]: dispatch
Oct 01 17:07:56 compute-0 ceph-mon[74273]: pgmap v1163: 305 pgs: 305 active+clean; 68 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 65 KiB/s wr, 3 op/s
Oct 01 17:07:58 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 68 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 83 KiB/s wr, 5 op/s
Oct 01 17:07:59 compute-0 ceph-mon[74273]: pgmap v1164: 305 pgs: 305 active+clean; 68 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 83 KiB/s wr, 5 op/s
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 68 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 56 KiB/s wr, 3 op/s
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b3a00dd2-1018-47de-9729-c563b1ade8d1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b3a00dd2-1018-47de-9729-c563b1ade8d1", "format": "json"}]: dispatch
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b3a00dd2-1018-47de-9729-c563b1ade8d1, vol_name:cephfs) < ""
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b3a00dd2-1018-47de-9729-c563b1ade8d1, vol_name:cephfs) < ""
Oct 01 17:08:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:08:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "df8b44c3-6adf-44f9-88cd-1ac785cf76d5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:df8b44c3-6adf-44f9-88cd-1ac785cf76d5, vol_name:cephfs) < ""
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/df8b44c3-6adf-44f9-88cd-1ac785cf76d5/.meta.tmp'
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/df8b44c3-6adf-44f9-88cd-1ac785cf76d5/.meta.tmp' to config b'/volumes/_nogroup/df8b44c3-6adf-44f9-88cd-1ac785cf76d5/.meta'
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:df8b44c3-6adf-44f9-88cd-1ac785cf76d5, vol_name:cephfs) < ""
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "df8b44c3-6adf-44f9-88cd-1ac785cf76d5", "format": "json"}]: dispatch
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:df8b44c3-6adf-44f9-88cd-1ac785cf76d5, vol_name:cephfs) < ""
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:df8b44c3-6adf-44f9-88cd-1ac785cf76d5, vol_name:cephfs) < ""
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "697d2422-31c0-4917-8660-133fa4e39c6e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:697d2422-31c0-4917-8660-133fa4e39c6e, vol_name:cephfs) < ""
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/697d2422-31c0-4917-8660-133fa4e39c6e/.meta.tmp'
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/697d2422-31c0-4917-8660-133fa4e39c6e/.meta.tmp' to config b'/volumes/_nogroup/697d2422-31c0-4917-8660-133fa4e39c6e/.meta'
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:697d2422-31c0-4917-8660-133fa4e39c6e, vol_name:cephfs) < ""
Oct 01 17:08:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:08:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "697d2422-31c0-4917-8660-133fa4e39c6e", "format": "json"}]: dispatch
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:697d2422-31c0-4917-8660-133fa4e39c6e, vol_name:cephfs) < ""
Oct 01 17:08:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:697d2422-31c0-4917-8660-133fa4e39c6e, vol_name:cephfs) < ""
Oct 01 17:08:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:08:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:08:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:08:01 compute-0 ceph-mon[74273]: pgmap v1165: 305 pgs: 305 active+clean; 68 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 56 KiB/s wr, 3 op/s
Oct 01 17:08:01 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b3a00dd2-1018-47de-9729-c563b1ade8d1", "format": "json"}]: dispatch
Oct 01 17:08:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:08:01 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "df8b44c3-6adf-44f9-88cd-1ac785cf76d5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:08:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:08:01 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:08:01 compute-0 podman[276794]: 2025-10-01 17:08:01.754877805 +0000 UTC m=+0.067650502 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 01 17:08:02 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 68 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 56 KiB/s wr, 3 op/s
Oct 01 17:08:02 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "df8b44c3-6adf-44f9-88cd-1ac785cf76d5", "format": "json"}]: dispatch
Oct 01 17:08:02 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "697d2422-31c0-4917-8660-133fa4e39c6e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:08:02 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "697d2422-31c0-4917-8660-133fa4e39c6e", "format": "json"}]: dispatch
Oct 01 17:08:02 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "697d2422-31c0-4917-8660-133fa4e39c6e", "format": "json"}]: dispatch
Oct 01 17:08:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:697d2422-31c0-4917-8660-133fa4e39c6e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:08:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:697d2422-31c0-4917-8660-133fa4e39c6e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:08:02 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:08:02.602+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '697d2422-31c0-4917-8660-133fa4e39c6e' of type subvolume
Oct 01 17:08:02 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '697d2422-31c0-4917-8660-133fa4e39c6e' of type subvolume
Oct 01 17:08:02 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "697d2422-31c0-4917-8660-133fa4e39c6e", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:697d2422-31c0-4917-8660-133fa4e39c6e, vol_name:cephfs) < ""
Oct 01 17:08:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/697d2422-31c0-4917-8660-133fa4e39c6e'' moved to trashcan
Oct 01 17:08:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:08:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:697d2422-31c0-4917-8660-133fa4e39c6e, vol_name:cephfs) < ""
Oct 01 17:08:03 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "df8b44c3-6adf-44f9-88cd-1ac785cf76d5", "format": "json"}]: dispatch
Oct 01 17:08:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:df8b44c3-6adf-44f9-88cd-1ac785cf76d5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:08:03 compute-0 ceph-mon[74273]: pgmap v1166: 305 pgs: 305 active+clean; 68 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 56 KiB/s wr, 3 op/s
Oct 01 17:08:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:df8b44c3-6adf-44f9-88cd-1ac785cf76d5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:08:03 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:08:03.218+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'df8b44c3-6adf-44f9-88cd-1ac785cf76d5' of type subvolume
Oct 01 17:08:03 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'df8b44c3-6adf-44f9-88cd-1ac785cf76d5' of type subvolume
Oct 01 17:08:03 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "df8b44c3-6adf-44f9-88cd-1ac785cf76d5", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:df8b44c3-6adf-44f9-88cd-1ac785cf76d5, vol_name:cephfs) < ""
Oct 01 17:08:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/df8b44c3-6adf-44f9-88cd-1ac785cf76d5'' moved to trashcan
Oct 01 17:08:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:08:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:df8b44c3-6adf-44f9-88cd-1ac785cf76d5, vol_name:cephfs) < ""
Oct 01 17:08:03 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b3a00dd2-1018-47de-9729-c563b1ade8d1", "format": "json"}]: dispatch
Oct 01 17:08:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b3a00dd2-1018-47de-9729-c563b1ade8d1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:08:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b3a00dd2-1018-47de-9729-c563b1ade8d1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:08:03 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b3a00dd2-1018-47de-9729-c563b1ade8d1", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b3a00dd2-1018-47de-9729-c563b1ade8d1, vol_name:cephfs) < ""
Oct 01 17:08:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b3a00dd2-1018-47de-9729-c563b1ade8d1'' moved to trashcan
Oct 01 17:08:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:08:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b3a00dd2-1018-47de-9729-c563b1ade8d1, vol_name:cephfs) < ""
Oct 01 17:08:03 compute-0 podman[276813]: 2025-10-01 17:08:03.760018979 +0000 UTC m=+0.074703635 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 01 17:08:04 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 68 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 87 KiB/s wr, 5 op/s
Oct 01 17:08:04 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "697d2422-31c0-4917-8660-133fa4e39c6e", "format": "json"}]: dispatch
Oct 01 17:08:04 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "697d2422-31c0-4917-8660-133fa4e39c6e", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:04 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "df8b44c3-6adf-44f9-88cd-1ac785cf76d5", "format": "json"}]: dispatch
Oct 01 17:08:04 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "df8b44c3-6adf-44f9-88cd-1ac785cf76d5", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:05 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b3a00dd2-1018-47de-9729-c563b1ade8d1", "format": "json"}]: dispatch
Oct 01 17:08:05 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b3a00dd2-1018-47de-9729-c563b1ade8d1", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:05 compute-0 ceph-mon[74273]: pgmap v1167: 305 pgs: 305 active+clean; 68 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 87 KiB/s wr, 5 op/s
Oct 01 17:08:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:08:06 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 68 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 49 KiB/s wr, 3 op/s
Oct 01 17:08:06 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "267a009d-a8d2-496f-8ea4-b7694aa0a844", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:08:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:267a009d-a8d2-496f-8ea4-b7694aa0a844, vol_name:cephfs) < ""
Oct 01 17:08:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/267a009d-a8d2-496f-8ea4-b7694aa0a844/.meta.tmp'
Oct 01 17:08:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/267a009d-a8d2-496f-8ea4-b7694aa0a844/.meta.tmp' to config b'/volumes/_nogroup/267a009d-a8d2-496f-8ea4-b7694aa0a844/.meta'
Oct 01 17:08:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:267a009d-a8d2-496f-8ea4-b7694aa0a844, vol_name:cephfs) < ""
Oct 01 17:08:06 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "267a009d-a8d2-496f-8ea4-b7694aa0a844", "format": "json"}]: dispatch
Oct 01 17:08:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:267a009d-a8d2-496f-8ea4-b7694aa0a844, vol_name:cephfs) < ""
Oct 01 17:08:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:267a009d-a8d2-496f-8ea4-b7694aa0a844, vol_name:cephfs) < ""
Oct 01 17:08:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:08:06 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:08:06 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7eafcb7a-4d98-4306-a178-8f37dd36887c", "snap_name": "ef793e02-9980-4db2-8e9a-aaf0250b4696_15352bd5-cb03-495a-99ef-36274a4b1dde", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ef793e02-9980-4db2-8e9a-aaf0250b4696_15352bd5-cb03-495a-99ef-36274a4b1dde, sub_name:7eafcb7a-4d98-4306-a178-8f37dd36887c, vol_name:cephfs) < ""
Oct 01 17:08:07 compute-0 ceph-mon[74273]: pgmap v1168: 305 pgs: 305 active+clean; 68 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 49 KiB/s wr, 3 op/s
Oct 01 17:08:07 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:08:08 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 69 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 76 KiB/s wr, 5 op/s
Oct 01 17:08:08 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "267a009d-a8d2-496f-8ea4-b7694aa0a844", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:08:08 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "267a009d-a8d2-496f-8ea4-b7694aa0a844", "format": "json"}]: dispatch
Oct 01 17:08:08 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7eafcb7a-4d98-4306-a178-8f37dd36887c", "snap_name": "ef793e02-9980-4db2-8e9a-aaf0250b4696_15352bd5-cb03-495a-99ef-36274a4b1dde", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:09 compute-0 ceph-mon[74273]: pgmap v1169: 305 pgs: 305 active+clean; 69 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 76 KiB/s wr, 5 op/s
Oct 01 17:08:10 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 69 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 58 KiB/s wr, 4 op/s
Oct 01 17:08:10 compute-0 ceph-mon[74273]: pgmap v1170: 305 pgs: 305 active+clean; 69 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 58 KiB/s wr, 4 op/s
Oct 01 17:08:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7eafcb7a-4d98-4306-a178-8f37dd36887c/.meta.tmp'
Oct 01 17:08:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7eafcb7a-4d98-4306-a178-8f37dd36887c/.meta.tmp' to config b'/volumes/_nogroup/7eafcb7a-4d98-4306-a178-8f37dd36887c/.meta'
Oct 01 17:08:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ef793e02-9980-4db2-8e9a-aaf0250b4696_15352bd5-cb03-495a-99ef-36274a4b1dde, sub_name:7eafcb7a-4d98-4306-a178-8f37dd36887c, vol_name:cephfs) < ""
Oct 01 17:08:10 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7eafcb7a-4d98-4306-a178-8f37dd36887c", "snap_name": "ef793e02-9980-4db2-8e9a-aaf0250b4696", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ef793e02-9980-4db2-8e9a-aaf0250b4696, sub_name:7eafcb7a-4d98-4306-a178-8f37dd36887c, vol_name:cephfs) < ""
Oct 01 17:08:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7eafcb7a-4d98-4306-a178-8f37dd36887c/.meta.tmp'
Oct 01 17:08:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7eafcb7a-4d98-4306-a178-8f37dd36887c/.meta.tmp' to config b'/volumes/_nogroup/7eafcb7a-4d98-4306-a178-8f37dd36887c/.meta'
Oct 01 17:08:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ef793e02-9980-4db2-8e9a-aaf0250b4696, sub_name:7eafcb7a-4d98-4306-a178-8f37dd36887c, vol_name:cephfs) < ""
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_17:08:11
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['volumes', 'vms', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'images', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control']
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:08:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:08:12 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 69 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 58 KiB/s wr, 4 op/s
Oct 01 17:08:12 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7eafcb7a-4d98-4306-a178-8f37dd36887c", "snap_name": "ef793e02-9980-4db2-8e9a-aaf0250b4696", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:13 compute-0 ceph-mon[74273]: pgmap v1171: 305 pgs: 305 active+clean; 69 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 58 KiB/s wr, 4 op/s
Oct 01 17:08:13 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7eafcb7a-4d98-4306-a178-8f37dd36887c", "format": "json"}]: dispatch
Oct 01 17:08:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7eafcb7a-4d98-4306-a178-8f37dd36887c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:08:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7eafcb7a-4d98-4306-a178-8f37dd36887c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:08:13 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7eafcb7a-4d98-4306-a178-8f37dd36887c' of type subvolume
Oct 01 17:08:13 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:08:13.601+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7eafcb7a-4d98-4306-a178-8f37dd36887c' of type subvolume
Oct 01 17:08:13 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7eafcb7a-4d98-4306-a178-8f37dd36887c", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7eafcb7a-4d98-4306-a178-8f37dd36887c, vol_name:cephfs) < ""
Oct 01 17:08:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7eafcb7a-4d98-4306-a178-8f37dd36887c'' moved to trashcan
Oct 01 17:08:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:08:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7eafcb7a-4d98-4306-a178-8f37dd36887c, vol_name:cephfs) < ""
Oct 01 17:08:13 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "267a009d-a8d2-496f-8ea4-b7694aa0a844", "format": "json"}]: dispatch
Oct 01 17:08:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:267a009d-a8d2-496f-8ea4-b7694aa0a844, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:08:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:267a009d-a8d2-496f-8ea4-b7694aa0a844, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:08:13 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '267a009d-a8d2-496f-8ea4-b7694aa0a844' of type subvolume
Oct 01 17:08:13 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:08:13.992+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '267a009d-a8d2-496f-8ea4-b7694aa0a844' of type subvolume
Oct 01 17:08:13 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "267a009d-a8d2-496f-8ea4-b7694aa0a844", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:267a009d-a8d2-496f-8ea4-b7694aa0a844, vol_name:cephfs) < ""
Oct 01 17:08:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/267a009d-a8d2-496f-8ea4-b7694aa0a844'' moved to trashcan
Oct 01 17:08:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:08:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:267a009d-a8d2-496f-8ea4-b7694aa0a844, vol_name:cephfs) < ""
Oct 01 17:08:14 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 69 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 84 KiB/s wr, 6 op/s
Oct 01 17:08:14 compute-0 podman[276834]: 2025-10-01 17:08:14.809400751 +0000 UTC m=+0.124136534 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 01 17:08:15 compute-0 sudo[276861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:08:15 compute-0 sudo[276861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:08:15 compute-0 sudo[276861]: pam_unix(sudo:session): session closed for user root
Oct 01 17:08:15 compute-0 sudo[276886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:08:15 compute-0 sudo[276886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:08:15 compute-0 sudo[276886]: pam_unix(sudo:session): session closed for user root
Oct 01 17:08:15 compute-0 sudo[276911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:08:15 compute-0 sudo[276911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:08:15 compute-0 sudo[276911]: pam_unix(sudo:session): session closed for user root
Oct 01 17:08:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Oct 01 17:08:15 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7eafcb7a-4d98-4306-a178-8f37dd36887c", "format": "json"}]: dispatch
Oct 01 17:08:15 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7eafcb7a-4d98-4306-a178-8f37dd36887c", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:15 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "267a009d-a8d2-496f-8ea4-b7694aa0a844", "format": "json"}]: dispatch
Oct 01 17:08:15 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "267a009d-a8d2-496f-8ea4-b7694aa0a844", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:15 compute-0 ceph-mon[74273]: pgmap v1172: 305 pgs: 305 active+clean; 69 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 84 KiB/s wr, 6 op/s
Oct 01 17:08:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Oct 01 17:08:15 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Oct 01 17:08:15 compute-0 sudo[276936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 17:08:15 compute-0 sudo[276936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:08:15 compute-0 sudo[276936]: pam_unix(sudo:session): session closed for user root
Oct 01 17:08:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:08:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:08:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:08:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 17:08:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:08:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 17:08:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:08:15 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 7a1355d0-73ed-4261-9c3f-928afb675ff9 does not exist
Oct 01 17:08:15 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev ff948e5a-cf8c-475d-9c8d-341b65aa0593 does not exist
Oct 01 17:08:15 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 38199ff4-54b9-4c27-996c-639a67050f78 does not exist
Oct 01 17:08:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 17:08:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:08:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 17:08:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:08:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:08:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:08:15 compute-0 sudo[276992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:08:16 compute-0 sudo[276992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:08:16 compute-0 sudo[276992]: pam_unix(sudo:session): session closed for user root
Oct 01 17:08:16 compute-0 sudo[277017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:08:16 compute-0 sudo[277017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:08:16 compute-0 sudo[277017]: pam_unix(sudo:session): session closed for user root
Oct 01 17:08:16 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 69 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 64 KiB/s wr, 5 op/s
Oct 01 17:08:16 compute-0 sudo[277042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:08:16 compute-0 sudo[277042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:08:16 compute-0 sudo[277042]: pam_unix(sudo:session): session closed for user root
Oct 01 17:08:16 compute-0 sudo[277067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 17:08:16 compute-0 sudo[277067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:08:16 compute-0 ceph-mon[74273]: osdmap e161: 3 total, 3 up, 3 in
Oct 01 17:08:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:08:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:08:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:08:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:08:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:08:16 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:08:16 compute-0 podman[277131]: 2025-10-01 17:08:16.589130222 +0000 UTC m=+0.042426469 container create 04150068399c657508e8e2312fcfe23e58f0eeb6c86677ad79ff922ac2bdacd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 17:08:16 compute-0 systemd[1]: Started libpod-conmon-04150068399c657508e8e2312fcfe23e58f0eeb6c86677ad79ff922ac2bdacd0.scope.
Oct 01 17:08:16 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:08:16 compute-0 podman[277131]: 2025-10-01 17:08:16.567978121 +0000 UTC m=+0.021274398 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:08:16 compute-0 podman[277131]: 2025-10-01 17:08:16.668445794 +0000 UTC m=+0.121742051 container init 04150068399c657508e8e2312fcfe23e58f0eeb6c86677ad79ff922ac2bdacd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:08:16 compute-0 podman[277131]: 2025-10-01 17:08:16.673718703 +0000 UTC m=+0.127014940 container start 04150068399c657508e8e2312fcfe23e58f0eeb6c86677ad79ff922ac2bdacd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 01 17:08:16 compute-0 podman[277131]: 2025-10-01 17:08:16.676855462 +0000 UTC m=+0.130151719 container attach 04150068399c657508e8e2312fcfe23e58f0eeb6c86677ad79ff922ac2bdacd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:08:16 compute-0 cool_hoover[277147]: 167 167
Oct 01 17:08:16 compute-0 systemd[1]: libpod-04150068399c657508e8e2312fcfe23e58f0eeb6c86677ad79ff922ac2bdacd0.scope: Deactivated successfully.
Oct 01 17:08:16 compute-0 podman[277131]: 2025-10-01 17:08:16.682714359 +0000 UTC m=+0.136010606 container died 04150068399c657508e8e2312fcfe23e58f0eeb6c86677ad79ff922ac2bdacd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:08:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaa29690c067650fa0a7b54d5c87457351de35d76cde0beb8bd8186565e44d8f-merged.mount: Deactivated successfully.
Oct 01 17:08:16 compute-0 podman[277131]: 2025-10-01 17:08:16.732424771 +0000 UTC m=+0.185721018 container remove 04150068399c657508e8e2312fcfe23e58f0eeb6c86677ad79ff922ac2bdacd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:08:16 compute-0 systemd[1]: libpod-conmon-04150068399c657508e8e2312fcfe23e58f0eeb6c86677ad79ff922ac2bdacd0.scope: Deactivated successfully.
Oct 01 17:08:16 compute-0 podman[277170]: 2025-10-01 17:08:16.929046034 +0000 UTC m=+0.060739315 container create 5d3ff7421a7b192aee0ce303d2bd8582e503f03f8446afa806ef44b723ed94fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 17:08:16 compute-0 systemd[1]: Started libpod-conmon-5d3ff7421a7b192aee0ce303d2bd8582e503f03f8446afa806ef44b723ed94fa.scope.
Oct 01 17:08:16 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f68cba86e4fd067cb44161e1b8829d07f5d2d75e4032fc570c62c0eb98d493/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f68cba86e4fd067cb44161e1b8829d07f5d2d75e4032fc570c62c0eb98d493/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f68cba86e4fd067cb44161e1b8829d07f5d2d75e4032fc570c62c0eb98d493/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f68cba86e4fd067cb44161e1b8829d07f5d2d75e4032fc570c62c0eb98d493/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f68cba86e4fd067cb44161e1b8829d07f5d2d75e4032fc570c62c0eb98d493/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 17:08:17 compute-0 podman[277170]: 2025-10-01 17:08:17.002400995 +0000 UTC m=+0.134094256 container init 5d3ff7421a7b192aee0ce303d2bd8582e503f03f8446afa806ef44b723ed94fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_keldysh, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 01 17:08:17 compute-0 podman[277170]: 2025-10-01 17:08:17.008410749 +0000 UTC m=+0.140104000 container start 5d3ff7421a7b192aee0ce303d2bd8582e503f03f8446afa806ef44b723ed94fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 17:08:17 compute-0 podman[277170]: 2025-10-01 17:08:16.913399484 +0000 UTC m=+0.045092755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:08:17 compute-0 podman[277170]: 2025-10-01 17:08:17.011428662 +0000 UTC m=+0.143121913 container attach 5d3ff7421a7b192aee0ce303d2bd8582e503f03f8446afa806ef44b723ed94fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_keldysh, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 01 17:08:17 compute-0 ceph-mon[74273]: pgmap v1174: 305 pgs: 305 active+clean; 69 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 64 KiB/s wr, 5 op/s
Oct 01 17:08:17 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "09ee21e6-5d89-4173-b2b4-f6be5a3351ee", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:08:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:09ee21e6-5d89-4173-b2b4-f6be5a3351ee, vol_name:cephfs) < ""
Oct 01 17:08:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09ee21e6-5d89-4173-b2b4-f6be5a3351ee/.meta.tmp'
Oct 01 17:08:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09ee21e6-5d89-4173-b2b4-f6be5a3351ee/.meta.tmp' to config b'/volumes/_nogroup/09ee21e6-5d89-4173-b2b4-f6be5a3351ee/.meta'
Oct 01 17:08:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:09ee21e6-5d89-4173-b2b4-f6be5a3351ee, vol_name:cephfs) < ""
Oct 01 17:08:17 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "09ee21e6-5d89-4173-b2b4-f6be5a3351ee", "format": "json"}]: dispatch
Oct 01 17:08:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:09ee21e6-5d89-4173-b2b4-f6be5a3351ee, vol_name:cephfs) < ""
Oct 01 17:08:17 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:09ee21e6-5d89-4173-b2b4-f6be5a3351ee, vol_name:cephfs) < ""
Oct 01 17:08:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:08:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:08:17 compute-0 nova_compute[259504]: 2025-10-01 17:08:17.749 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:08:17 compute-0 trusting_keldysh[277186]: --> passed data devices: 0 physical, 3 LVM
Oct 01 17:08:17 compute-0 trusting_keldysh[277186]: --> relative data size: 1.0
Oct 01 17:08:17 compute-0 trusting_keldysh[277186]: --> All data devices are unavailable
Oct 01 17:08:18 compute-0 systemd[1]: libpod-5d3ff7421a7b192aee0ce303d2bd8582e503f03f8446afa806ef44b723ed94fa.scope: Deactivated successfully.
Oct 01 17:08:18 compute-0 podman[277170]: 2025-10-01 17:08:18.030294461 +0000 UTC m=+1.161987712 container died 5d3ff7421a7b192aee0ce303d2bd8582e503f03f8446afa806ef44b723ed94fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_keldysh, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:08:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1f68cba86e4fd067cb44161e1b8829d07f5d2d75e4032fc570c62c0eb98d493-merged.mount: Deactivated successfully.
Oct 01 17:08:18 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 69 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 75 KiB/s wr, 5 op/s
Oct 01 17:08:18 compute-0 podman[277170]: 2025-10-01 17:08:18.211498535 +0000 UTC m=+1.343191786 container remove 5d3ff7421a7b192aee0ce303d2bd8582e503f03f8446afa806ef44b723ed94fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_keldysh, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 17:08:18 compute-0 systemd[1]: libpod-conmon-5d3ff7421a7b192aee0ce303d2bd8582e503f03f8446afa806ef44b723ed94fa.scope: Deactivated successfully.
Oct 01 17:08:18 compute-0 sudo[277067]: pam_unix(sudo:session): session closed for user root
Oct 01 17:08:18 compute-0 sudo[277229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:08:18 compute-0 sudo[277229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:08:18 compute-0 sudo[277229]: pam_unix(sudo:session): session closed for user root
Oct 01 17:08:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:08:18 compute-0 sudo[277254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:08:18 compute-0 sudo[277254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:08:18 compute-0 sudo[277254]: pam_unix(sudo:session): session closed for user root
Oct 01 17:08:18 compute-0 sudo[277279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:08:18 compute-0 sudo[277279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:08:18 compute-0 sudo[277279]: pam_unix(sudo:session): session closed for user root
Oct 01 17:08:18 compute-0 sudo[277304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 17:08:18 compute-0 sudo[277304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:08:19 compute-0 podman[277370]: 2025-10-01 17:08:19.096953782 +0000 UTC m=+0.105094078 container create 20032755a5b0df5fda0f912d0148772eca494cb5e13e143148e318c7a0498086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 17:08:19 compute-0 podman[277370]: 2025-10-01 17:08:19.023381736 +0000 UTC m=+0.031522002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:08:19 compute-0 systemd[1]: Started libpod-conmon-20032755a5b0df5fda0f912d0148772eca494cb5e13e143148e318c7a0498086.scope.
Oct 01 17:08:19 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:08:19 compute-0 podman[277370]: 2025-10-01 17:08:19.360796288 +0000 UTC m=+0.368936564 container init 20032755a5b0df5fda0f912d0148772eca494cb5e13e143148e318c7a0498086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 17:08:19 compute-0 podman[277370]: 2025-10-01 17:08:19.37844773 +0000 UTC m=+0.386588026 container start 20032755a5b0df5fda0f912d0148772eca494cb5e13e143148e318c7a0498086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:08:19 compute-0 systemd[1]: libpod-20032755a5b0df5fda0f912d0148772eca494cb5e13e143148e318c7a0498086.scope: Deactivated successfully.
Oct 01 17:08:19 compute-0 conmon[277400]: conmon 20032755a5b0df5fda0f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-20032755a5b0df5fda0f912d0148772eca494cb5e13e143148e318c7a0498086.scope/container/memory.events
Oct 01 17:08:19 compute-0 blissful_roentgen[277400]: 167 167
Oct 01 17:08:19 compute-0 podman[277370]: 2025-10-01 17:08:19.449425509 +0000 UTC m=+0.457565765 container attach 20032755a5b0df5fda0f912d0148772eca494cb5e13e143148e318c7a0498086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:08:19 compute-0 podman[277370]: 2025-10-01 17:08:19.450965736 +0000 UTC m=+0.459106022 container died 20032755a5b0df5fda0f912d0148772eca494cb5e13e143148e318c7a0498086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 17:08:19 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "09ee21e6-5d89-4173-b2b4-f6be5a3351ee", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:08:19 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "09ee21e6-5d89-4173-b2b4-f6be5a3351ee", "format": "json"}]: dispatch
Oct 01 17:08:19 compute-0 ceph-mon[74273]: pgmap v1175: 305 pgs: 305 active+clean; 69 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 75 KiB/s wr, 5 op/s
Oct 01 17:08:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-26f5497583cc170f1ee24734435f4123e524fdeda0284a88c5f70626dc2f9751-merged.mount: Deactivated successfully.
Oct 01 17:08:19 compute-0 podman[277370]: 2025-10-01 17:08:19.762359912 +0000 UTC m=+0.770500208 container remove 20032755a5b0df5fda0f912d0148772eca494cb5e13e143148e318c7a0498086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:08:19 compute-0 systemd[1]: libpod-conmon-20032755a5b0df5fda0f912d0148772eca494cb5e13e143148e318c7a0498086.scope: Deactivated successfully.
Oct 01 17:08:19 compute-0 podman[277384]: 2025-10-01 17:08:19.812078963 +0000 UTC m=+0.671107036 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct 01 17:08:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:08:19.978 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:08:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:08:19.979 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:08:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:08:19.980 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:08:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:08:20.011 162304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:71:db', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:60:3f:78:bd:29'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 17:08:20 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:08:20.013 162304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 17:08:20 compute-0 podman[277432]: 2025-10-01 17:08:20.018109779 +0000 UTC m=+0.091022659 container create efc9a33b7cbdcb594f62a94963a777ca189db3f278fecef9b98d178341763bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 01 17:08:20 compute-0 podman[277432]: 2025-10-01 17:08:19.978987416 +0000 UTC m=+0.051900286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:08:20 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 69 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 75 KiB/s wr, 5 op/s
Oct 01 17:08:20 compute-0 systemd[1]: Started libpod-conmon-efc9a33b7cbdcb594f62a94963a777ca189db3f278fecef9b98d178341763bb1.scope.
Oct 01 17:08:20 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167aca9494e43c417563adb683e23e5e62f5e8daa3dd000e956b50adb42e9ff7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167aca9494e43c417563adb683e23e5e62f5e8daa3dd000e956b50adb42e9ff7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167aca9494e43c417563adb683e23e5e62f5e8daa3dd000e956b50adb42e9ff7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167aca9494e43c417563adb683e23e5e62f5e8daa3dd000e956b50adb42e9ff7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:08:20 compute-0 podman[277432]: 2025-10-01 17:08:20.306985464 +0000 UTC m=+0.379898384 container init efc9a33b7cbdcb594f62a94963a777ca189db3f278fecef9b98d178341763bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 17:08:20 compute-0 podman[277432]: 2025-10-01 17:08:20.316776182 +0000 UTC m=+0.389689022 container start efc9a33b7cbdcb594f62a94963a777ca189db3f278fecef9b98d178341763bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_boyd, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:08:20 compute-0 podman[277432]: 2025-10-01 17:08:20.430659977 +0000 UTC m=+0.503572867 container attach efc9a33b7cbdcb594f62a94963a777ca189db3f278fecef9b98d178341763bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 17:08:20 compute-0 ceph-mon[74273]: pgmap v1176: 305 pgs: 305 active+clean; 69 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 75 KiB/s wr, 5 op/s
Oct 01 17:08:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:08:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Oct 01 17:08:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Oct 01 17:08:20 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]: {
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:     "0": [
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:         {
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "devices": [
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "/dev/loop3"
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             ],
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "lv_name": "ceph_lv0",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "lv_size": "21470642176",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "name": "ceph_lv0",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "tags": {
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.cluster_name": "ceph",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.crush_device_class": "",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.encrypted": "0",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.osd_id": "0",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.type": "block",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.vdo": "0"
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             },
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "type": "block",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "vg_name": "ceph_vg0"
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:         }
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:     ],
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:     "1": [
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:         {
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "devices": [
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "/dev/loop4"
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             ],
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "lv_name": "ceph_lv1",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "lv_size": "21470642176",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "name": "ceph_lv1",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "tags": {
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.cluster_name": "ceph",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.crush_device_class": "",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.encrypted": "0",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.osd_id": "1",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.type": "block",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.vdo": "0"
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             },
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "type": "block",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "vg_name": "ceph_vg1"
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:         }
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:     ],
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:     "2": [
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:         {
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "devices": [
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "/dev/loop5"
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             ],
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "lv_name": "ceph_lv2",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "lv_size": "21470642176",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "name": "ceph_lv2",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "tags": {
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.cluster_name": "ceph",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.crush_device_class": "",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.encrypted": "0",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.osd_id": "2",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.type": "block",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:                 "ceph.vdo": "0"
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             },
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "type": "block",
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:             "vg_name": "ceph_vg2"
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:         }
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]:     ]
Oct 01 17:08:21 compute-0 intelligent_boyd[277448]: }
Oct 01 17:08:21 compute-0 systemd[1]: libpod-efc9a33b7cbdcb594f62a94963a777ca189db3f278fecef9b98d178341763bb1.scope: Deactivated successfully.
Oct 01 17:08:21 compute-0 podman[277432]: 2025-10-01 17:08:21.17103013 +0000 UTC m=+1.243942980 container died efc9a33b7cbdcb594f62a94963a777ca189db3f278fecef9b98d178341763bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00045454637950120467 of space, bias 4.0, pg target 0.5454556554014456 quantized to 16 (current 16)
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:08:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-167aca9494e43c417563adb683e23e5e62f5e8daa3dd000e956b50adb42e9ff7-merged.mount: Deactivated successfully.
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "09ee21e6-5d89-4173-b2b4-f6be5a3351ee", "format": "json"}]: dispatch
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:09ee21e6-5d89-4173-b2b4-f6be5a3351ee, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:09ee21e6-5d89-4173-b2b4-f6be5a3351ee, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:08:21 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:08:21.401+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '09ee21e6-5d89-4173-b2b4-f6be5a3351ee' of type subvolume
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '09ee21e6-5d89-4173-b2b4-f6be5a3351ee' of type subvolume
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "09ee21e6-5d89-4173-b2b4-f6be5a3351ee", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:09ee21e6-5d89-4173-b2b4-f6be5a3351ee, vol_name:cephfs) < ""
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/09ee21e6-5d89-4173-b2b4-f6be5a3351ee'' moved to trashcan
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:08:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:09ee21e6-5d89-4173-b2b4-f6be5a3351ee, vol_name:cephfs) < ""
Oct 01 17:08:21 compute-0 podman[277432]: 2025-10-01 17:08:21.523371202 +0000 UTC m=+1.596284062 container remove efc9a33b7cbdcb594f62a94963a777ca189db3f278fecef9b98d178341763bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_boyd, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:08:21 compute-0 systemd[1]: libpod-conmon-efc9a33b7cbdcb594f62a94963a777ca189db3f278fecef9b98d178341763bb1.scope: Deactivated successfully.
Oct 01 17:08:21 compute-0 sudo[277304]: pam_unix(sudo:session): session closed for user root
Oct 01 17:08:21 compute-0 sudo[277471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:08:21 compute-0 sudo[277471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:08:21 compute-0 sudo[277471]: pam_unix(sudo:session): session closed for user root
Oct 01 17:08:21 compute-0 nova_compute[259504]: 2025-10-01 17:08:21.749 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:08:21 compute-0 nova_compute[259504]: 2025-10-01 17:08:21.772 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:08:21 compute-0 nova_compute[259504]: 2025-10-01 17:08:21.773 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:08:21 compute-0 nova_compute[259504]: 2025-10-01 17:08:21.774 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:08:21 compute-0 nova_compute[259504]: 2025-10-01 17:08:21.774 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 17:08:21 compute-0 nova_compute[259504]: 2025-10-01 17:08:21.775 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:08:21 compute-0 sudo[277496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:08:21 compute-0 sudo[277496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:08:21 compute-0 sudo[277496]: pam_unix(sudo:session): session closed for user root
Oct 01 17:08:21 compute-0 sudo[277522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:08:21 compute-0 sudo[277522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:08:21 compute-0 sudo[277522]: pam_unix(sudo:session): session closed for user root
Oct 01 17:08:21 compute-0 ceph-mon[74273]: osdmap e162: 3 total, 3 up, 3 in
Oct 01 17:08:21 compute-0 sudo[277551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 17:08:21 compute-0 sudo[277551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:08:22 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 69 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 55 KiB/s wr, 2 op/s
Oct 01 17:08:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:08:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1981260345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:08:22 compute-0 nova_compute[259504]: 2025-10-01 17:08:22.254 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:08:22 compute-0 nova_compute[259504]: 2025-10-01 17:08:22.401 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 17:08:22 compute-0 nova_compute[259504]: 2025-10-01 17:08:22.402 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5020MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 17:08:22 compute-0 nova_compute[259504]: 2025-10-01 17:08:22.403 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:08:22 compute-0 nova_compute[259504]: 2025-10-01 17:08:22.403 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:08:22 compute-0 podman[277634]: 2025-10-01 17:08:22.405784023 +0000 UTC m=+0.115228634 container create 76313c34dae689d93b9f397a0ebca412a51bf0251eef31d73990f5591f91e394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jones, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:08:22 compute-0 podman[277634]: 2025-10-01 17:08:22.322568668 +0000 UTC m=+0.032013309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:08:22 compute-0 nova_compute[259504]: 2025-10-01 17:08:22.464 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 17:08:22 compute-0 nova_compute[259504]: 2025-10-01 17:08:22.464 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 17:08:22 compute-0 systemd[1]: Started libpod-conmon-76313c34dae689d93b9f397a0ebca412a51bf0251eef31d73990f5591f91e394.scope.
Oct 01 17:08:22 compute-0 nova_compute[259504]: 2025-10-01 17:08:22.481 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:08:22 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:08:22 compute-0 podman[277634]: 2025-10-01 17:08:22.534687586 +0000 UTC m=+0.244132187 container init 76313c34dae689d93b9f397a0ebca412a51bf0251eef31d73990f5591f91e394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jones, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 17:08:22 compute-0 podman[277634]: 2025-10-01 17:08:22.541089409 +0000 UTC m=+0.250534030 container start 76313c34dae689d93b9f397a0ebca412a51bf0251eef31d73990f5591f91e394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:08:22 compute-0 quirky_jones[277650]: 167 167
Oct 01 17:08:22 compute-0 systemd[1]: libpod-76313c34dae689d93b9f397a0ebca412a51bf0251eef31d73990f5591f91e394.scope: Deactivated successfully.
Oct 01 17:08:22 compute-0 podman[277634]: 2025-10-01 17:08:22.670271768 +0000 UTC m=+0.379716389 container attach 76313c34dae689d93b9f397a0ebca412a51bf0251eef31d73990f5591f91e394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jones, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:08:22 compute-0 podman[277634]: 2025-10-01 17:08:22.670669367 +0000 UTC m=+0.380113968 container died 76313c34dae689d93b9f397a0ebca412a51bf0251eef31d73990f5591f91e394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:08:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-521162c10d1bbe28dba839ee16ee82417dcae83edeb62fac7bbf71da28f577b1-merged.mount: Deactivated successfully.
Oct 01 17:08:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:08:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3818186690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:08:22 compute-0 nova_compute[259504]: 2025-10-01 17:08:22.880 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:08:22 compute-0 nova_compute[259504]: 2025-10-01 17:08:22.889 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 17:08:22 compute-0 nova_compute[259504]: 2025-10-01 17:08:22.919 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 17:08:22 compute-0 nova_compute[259504]: 2025-10-01 17:08:22.921 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 17:08:22 compute-0 nova_compute[259504]: 2025-10-01 17:08:22.921 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:08:23 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "09ee21e6-5d89-4173-b2b4-f6be5a3351ee", "format": "json"}]: dispatch
Oct 01 17:08:23 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "09ee21e6-5d89-4173-b2b4-f6be5a3351ee", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:23 compute-0 ceph-mon[74273]: pgmap v1178: 305 pgs: 305 active+clean; 69 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 55 KiB/s wr, 2 op/s
Oct 01 17:08:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1981260345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:08:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3818186690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:08:23 compute-0 podman[277634]: 2025-10-01 17:08:23.586884181 +0000 UTC m=+1.296328832 container remove 76313c34dae689d93b9f397a0ebca412a51bf0251eef31d73990f5591f91e394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jones, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct 01 17:08:23 compute-0 systemd[1]: libpod-conmon-76313c34dae689d93b9f397a0ebca412a51bf0251eef31d73990f5591f91e394.scope: Deactivated successfully.
Oct 01 17:08:23 compute-0 podman[277696]: 2025-10-01 17:08:23.893860448 +0000 UTC m=+0.095664232 container create 2f85a5f064b206e8f1c6230d5b65ad504ab01592bb699f38870943cb959b9c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_germain, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:08:23 compute-0 podman[277696]: 2025-10-01 17:08:23.827941414 +0000 UTC m=+0.029745228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:08:23 compute-0 nova_compute[259504]: 2025-10-01 17:08:23.922 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:08:23 compute-0 nova_compute[259504]: 2025-10-01 17:08:23.923 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 17:08:23 compute-0 nova_compute[259504]: 2025-10-01 17:08:23.923 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 17:08:23 compute-0 systemd[1]: Started libpod-conmon-2f85a5f064b206e8f1c6230d5b65ad504ab01592bb699f38870943cb959b9c3c.scope.
Oct 01 17:08:23 compute-0 nova_compute[259504]: 2025-10-01 17:08:23.966 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 17:08:23 compute-0 nova_compute[259504]: 2025-10-01 17:08:23.967 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:08:23 compute-0 nova_compute[259504]: 2025-10-01 17:08:23.967 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:08:23 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:08:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcafc210af154b50ba2a51f46a7079c6fec72b835d8bdd4f96effd450a302fff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:08:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcafc210af154b50ba2a51f46a7079c6fec72b835d8bdd4f96effd450a302fff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:08:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcafc210af154b50ba2a51f46a7079c6fec72b835d8bdd4f96effd450a302fff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:08:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcafc210af154b50ba2a51f46a7079c6fec72b835d8bdd4f96effd450a302fff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:08:24 compute-0 podman[277696]: 2025-10-01 17:08:24.000764864 +0000 UTC m=+0.202568698 container init 2f85a5f064b206e8f1c6230d5b65ad504ab01592bb699f38870943cb959b9c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 17:08:24 compute-0 podman[277696]: 2025-10-01 17:08:24.016051405 +0000 UTC m=+0.217855189 container start 2f85a5f064b206e8f1c6230d5b65ad504ab01592bb699f38870943cb959b9c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 17:08:24 compute-0 podman[277696]: 2025-10-01 17:08:24.019967504 +0000 UTC m=+0.221771308 container attach 2f85a5f064b206e8f1c6230d5b65ad504ab01592bb699f38870943cb959b9c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_germain, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 17:08:24 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 70 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 231 B/s rd, 86 KiB/s wr, 4 op/s
Oct 01 17:08:24 compute-0 nova_compute[259504]: 2025-10-01 17:08:24.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:08:24 compute-0 nova_compute[259504]: 2025-10-01 17:08:24.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:08:24 compute-0 ceph-mon[74273]: pgmap v1179: 305 pgs: 305 active+clean; 70 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 231 B/s rd, 86 KiB/s wr, 4 op/s
Oct 01 17:08:25 compute-0 funny_germain[277712]: {
Oct 01 17:08:25 compute-0 funny_germain[277712]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 17:08:25 compute-0 funny_germain[277712]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:08:25 compute-0 funny_germain[277712]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 17:08:25 compute-0 funny_germain[277712]:         "osd_id": 2,
Oct 01 17:08:25 compute-0 funny_germain[277712]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:08:25 compute-0 funny_germain[277712]:         "type": "bluestore"
Oct 01 17:08:25 compute-0 funny_germain[277712]:     },
Oct 01 17:08:25 compute-0 funny_germain[277712]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 17:08:25 compute-0 funny_germain[277712]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:08:25 compute-0 funny_germain[277712]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 17:08:25 compute-0 funny_germain[277712]:         "osd_id": 0,
Oct 01 17:08:25 compute-0 funny_germain[277712]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:08:25 compute-0 funny_germain[277712]:         "type": "bluestore"
Oct 01 17:08:25 compute-0 funny_germain[277712]:     },
Oct 01 17:08:25 compute-0 funny_germain[277712]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 17:08:25 compute-0 funny_germain[277712]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:08:25 compute-0 funny_germain[277712]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 17:08:25 compute-0 funny_germain[277712]:         "osd_id": 1,
Oct 01 17:08:25 compute-0 funny_germain[277712]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:08:25 compute-0 funny_germain[277712]:         "type": "bluestore"
Oct 01 17:08:25 compute-0 funny_germain[277712]:     }
Oct 01 17:08:25 compute-0 funny_germain[277712]: }
Oct 01 17:08:25 compute-0 systemd[1]: libpod-2f85a5f064b206e8f1c6230d5b65ad504ab01592bb699f38870943cb959b9c3c.scope: Deactivated successfully.
Oct 01 17:08:25 compute-0 podman[277696]: 2025-10-01 17:08:25.179483729 +0000 UTC m=+1.381287543 container died 2f85a5f064b206e8f1c6230d5b65ad504ab01592bb699f38870943cb959b9c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_germain, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 01 17:08:25 compute-0 systemd[1]: libpod-2f85a5f064b206e8f1c6230d5b65ad504ab01592bb699f38870943cb959b9c3c.scope: Consumed 1.168s CPU time.
Oct 01 17:08:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcafc210af154b50ba2a51f46a7079c6fec72b835d8bdd4f96effd450a302fff-merged.mount: Deactivated successfully.
Oct 01 17:08:25 compute-0 podman[277696]: 2025-10-01 17:08:25.416040378 +0000 UTC m=+1.617844152 container remove 2f85a5f064b206e8f1c6230d5b65ad504ab01592bb699f38870943cb959b9c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_germain, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 17:08:25 compute-0 systemd[1]: libpod-conmon-2f85a5f064b206e8f1c6230d5b65ad504ab01592bb699f38870943cb959b9c3c.scope: Deactivated successfully.
Oct 01 17:08:25 compute-0 sudo[277551]: pam_unix(sudo:session): session closed for user root
Oct 01 17:08:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:08:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:08:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:08:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:08:25 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 2aceb345-4af4-46a5-bfdb-ffca29f596bb does not exist
Oct 01 17:08:25 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 181304b4-5e90-4133-9aaf-f2245368c88d does not exist
Oct 01 17:08:25 compute-0 sudo[277759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:08:25 compute-0 sudo[277759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:08:25 compute-0 sudo[277759]: pam_unix(sudo:session): session closed for user root
Oct 01 17:08:25 compute-0 sudo[277784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 17:08:25 compute-0 sudo[277784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:08:25 compute-0 sudo[277784]: pam_unix(sudo:session): session closed for user root
Oct 01 17:08:25 compute-0 nova_compute[259504]: 2025-10-01 17:08:25.749 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:08:25 compute-0 nova_compute[259504]: 2025-10-01 17:08:25.750 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 17:08:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:08:25 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afbc5b28-4a3d-4f5a-9775-80288db0083b", "snap_name": "b235c18d-212f-4b02-bc6d-e45b017ebaf4_0ae42b28-a7d0-4b1b-8156-5bc992766afc", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b235c18d-212f-4b02-bc6d-e45b017ebaf4_0ae42b28-a7d0-4b1b-8156-5bc992766afc, sub_name:afbc5b28-4a3d-4f5a-9775-80288db0083b, vol_name:cephfs) < ""
Oct 01 17:08:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afbc5b28-4a3d-4f5a-9775-80288db0083b/.meta.tmp'
Oct 01 17:08:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afbc5b28-4a3d-4f5a-9775-80288db0083b/.meta.tmp' to config b'/volumes/_nogroup/afbc5b28-4a3d-4f5a-9775-80288db0083b/.meta'
Oct 01 17:08:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b235c18d-212f-4b02-bc6d-e45b017ebaf4_0ae42b28-a7d0-4b1b-8156-5bc992766afc, sub_name:afbc5b28-4a3d-4f5a-9775-80288db0083b, vol_name:cephfs) < ""
Oct 01 17:08:26 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afbc5b28-4a3d-4f5a-9775-80288db0083b", "snap_name": "b235c18d-212f-4b02-bc6d-e45b017ebaf4", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b235c18d-212f-4b02-bc6d-e45b017ebaf4, sub_name:afbc5b28-4a3d-4f5a-9775-80288db0083b, vol_name:cephfs) < ""
Oct 01 17:08:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afbc5b28-4a3d-4f5a-9775-80288db0083b/.meta.tmp'
Oct 01 17:08:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afbc5b28-4a3d-4f5a-9775-80288db0083b/.meta.tmp' to config b'/volumes/_nogroup/afbc5b28-4a3d-4f5a-9775-80288db0083b/.meta'
Oct 01 17:08:26 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 70 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 76 KiB/s wr, 3 op/s
Oct 01 17:08:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b235c18d-212f-4b02-bc6d-e45b017ebaf4, sub_name:afbc5b28-4a3d-4f5a-9775-80288db0083b, vol_name:cephfs) < ""
Oct 01 17:08:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:08:26 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:08:26 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afbc5b28-4a3d-4f5a-9775-80288db0083b", "snap_name": "b235c18d-212f-4b02-bc6d-e45b017ebaf4_0ae42b28-a7d0-4b1b-8156-5bc992766afc", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:26 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afbc5b28-4a3d-4f5a-9775-80288db0083b", "snap_name": "b235c18d-212f-4b02-bc6d-e45b017ebaf4", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:26 compute-0 ceph-mon[74273]: pgmap v1180: 305 pgs: 305 active+clean; 70 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 76 KiB/s wr, 3 op/s
Oct 01 17:08:26 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8bab47f1-016a-4c5f-8b58-46e57c02ad64", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:08:26 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8bab47f1-016a-4c5f-8b58-46e57c02ad64, vol_name:cephfs) < ""
Oct 01 17:08:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8bab47f1-016a-4c5f-8b58-46e57c02ad64/.meta.tmp'
Oct 01 17:08:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8bab47f1-016a-4c5f-8b58-46e57c02ad64/.meta.tmp' to config b'/volumes/_nogroup/8bab47f1-016a-4c5f-8b58-46e57c02ad64/.meta'
Oct 01 17:08:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8bab47f1-016a-4c5f-8b58-46e57c02ad64, vol_name:cephfs) < ""
Oct 01 17:08:27 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8bab47f1-016a-4c5f-8b58-46e57c02ad64", "format": "json"}]: dispatch
Oct 01 17:08:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8bab47f1-016a-4c5f-8b58-46e57c02ad64, vol_name:cephfs) < ""
Oct 01 17:08:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8bab47f1-016a-4c5f-8b58-46e57c02ad64, vol_name:cephfs) < ""
Oct 01 17:08:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:08:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:08:27 compute-0 nova_compute[259504]: 2025-10-01 17:08:27.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:08:27 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8bab47f1-016a-4c5f-8b58-46e57c02ad64", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:08:27 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8bab47f1-016a-4c5f-8b58-46e57c02ad64", "format": "json"}]: dispatch
Oct 01 17:08:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:08:28 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 70 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 55 KiB/s wr, 4 op/s
Oct 01 17:08:28 compute-0 ceph-mon[74273]: pgmap v1181: 305 pgs: 305 active+clean; 70 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 55 KiB/s wr, 4 op/s
Oct 01 17:08:29 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:08:29.015 162304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2971fc2-5b75-459a-98a0-6e626d0d4d99, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 17:08:30 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "afbc5b28-4a3d-4f5a-9775-80288db0083b", "format": "json"}]: dispatch
Oct 01 17:08:30 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:afbc5b28-4a3d-4f5a-9775-80288db0083b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:08:30 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:afbc5b28-4a3d-4f5a-9775-80288db0083b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:08:30 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:08:30.038+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'afbc5b28-4a3d-4f5a-9775-80288db0083b' of type subvolume
Oct 01 17:08:30 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'afbc5b28-4a3d-4f5a-9775-80288db0083b' of type subvolume
Oct 01 17:08:30 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "afbc5b28-4a3d-4f5a-9775-80288db0083b", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:30 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:afbc5b28-4a3d-4f5a-9775-80288db0083b, vol_name:cephfs) < ""
Oct 01 17:08:30 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/afbc5b28-4a3d-4f5a-9775-80288db0083b'' moved to trashcan
Oct 01 17:08:30 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:08:30 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:afbc5b28-4a3d-4f5a-9775-80288db0083b, vol_name:cephfs) < ""
Oct 01 17:08:30 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 70 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 55 KiB/s wr, 4 op/s
Oct 01 17:08:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Oct 01 17:08:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Oct 01 17:08:30 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Oct 01 17:08:30 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8bab47f1-016a-4c5f-8b58-46e57c02ad64", "snap_name": "f40cff96-8848-426c-90c1-e99c5be0398c", "format": "json"}]: dispatch
Oct 01 17:08:30 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f40cff96-8848-426c-90c1-e99c5be0398c, sub_name:8bab47f1-016a-4c5f-8b58-46e57c02ad64, vol_name:cephfs) < ""
Oct 01 17:08:30 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f40cff96-8848-426c-90c1-e99c5be0398c, sub_name:8bab47f1-016a-4c5f-8b58-46e57c02ad64, vol_name:cephfs) < ""
Oct 01 17:08:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:08:31 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "afbc5b28-4a3d-4f5a-9775-80288db0083b", "format": "json"}]: dispatch
Oct 01 17:08:31 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "afbc5b28-4a3d-4f5a-9775-80288db0083b", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:31 compute-0 ceph-mon[74273]: pgmap v1182: 305 pgs: 305 active+clean; 70 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 55 KiB/s wr, 4 op/s
Oct 01 17:08:31 compute-0 ceph-mon[74273]: osdmap e163: 3 total, 3 up, 3 in
Oct 01 17:08:31 compute-0 nova_compute[259504]: 2025-10-01 17:08:31.745 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:08:32 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 70 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 55 KiB/s wr, 4 op/s
Oct 01 17:08:32 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8bab47f1-016a-4c5f-8b58-46e57c02ad64", "snap_name": "f40cff96-8848-426c-90c1-e99c5be0398c", "format": "json"}]: dispatch
Oct 01 17:08:32 compute-0 podman[277809]: 2025-10-01 17:08:32.765767582 +0000 UTC m=+0.081220558 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 01 17:08:33 compute-0 ceph-mon[74273]: pgmap v1184: 305 pgs: 305 active+clean; 70 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 55 KiB/s wr, 4 op/s
Oct 01 17:08:34 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 70 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 57 KiB/s wr, 5 op/s
Oct 01 17:08:34 compute-0 podman[277831]: 2025-10-01 17:08:34.808402166 +0000 UTC m=+0.108155986 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0)
Oct 01 17:08:35 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "8bab47f1-016a-4c5f-8b58-46e57c02ad64", "snap_name": "f40cff96-8848-426c-90c1-e99c5be0398c", "target_sub_name": "8814b9a0-ee27-4d85-bc47-1d08537fe868", "format": "json"}]: dispatch
Oct 01 17:08:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:f40cff96-8848-426c-90c1-e99c5be0398c, sub_name:8bab47f1-016a-4c5f-8b58-46e57c02ad64, target_sub_name:8814b9a0-ee27-4d85-bc47-1d08537fe868, vol_name:cephfs) < ""
Oct 01 17:08:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/8814b9a0-ee27-4d85-bc47-1d08537fe868/.meta.tmp'
Oct 01 17:08:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8814b9a0-ee27-4d85-bc47-1d08537fe868/.meta.tmp' to config b'/volumes/_nogroup/8814b9a0-ee27-4d85-bc47-1d08537fe868/.meta'
Oct 01 17:08:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.clone_index] tracking-id c3330fd6-a9c0-4c92-bcee-9e746476d37e for path b'/volumes/_nogroup/8814b9a0-ee27-4d85-bc47-1d08537fe868'
Oct 01 17:08:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/8bab47f1-016a-4c5f-8b58-46e57c02ad64/.meta.tmp'
Oct 01 17:08:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8bab47f1-016a-4c5f-8b58-46e57c02ad64/.meta.tmp' to config b'/volumes/_nogroup/8bab47f1-016a-4c5f-8b58-46e57c02ad64/.meta'
Oct 01 17:08:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:08:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:f40cff96-8848-426c-90c1-e99c5be0398c, sub_name:8bab47f1-016a-4c5f-8b58-46e57c02ad64, target_sub_name:8814b9a0-ee27-4d85-bc47-1d08537fe868, vol_name:cephfs) < ""
Oct 01 17:08:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/8814b9a0-ee27-4d85-bc47-1d08537fe868
Oct 01 17:08:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 8814b9a0-ee27-4d85-bc47-1d08537fe868)
Oct 01 17:08:35 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8814b9a0-ee27-4d85-bc47-1d08537fe868", "format": "json"}]: dispatch
Oct 01 17:08:35 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8814b9a0-ee27-4d85-bc47-1d08537fe868, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:08:35 compute-0 ceph-mon[74273]: pgmap v1185: 305 pgs: 305 active+clean; 70 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 57 KiB/s wr, 5 op/s
Oct 01 17:08:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:08:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Oct 01 17:08:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Oct 01 17:08:35 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Oct 01 17:08:36 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 70 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 42 KiB/s wr, 2 op/s
Oct 01 17:08:36 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "8bab47f1-016a-4c5f-8b58-46e57c02ad64", "snap_name": "f40cff96-8848-426c-90c1-e99c5be0398c", "target_sub_name": "8814b9a0-ee27-4d85-bc47-1d08537fe868", "format": "json"}]: dispatch
Oct 01 17:08:36 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8814b9a0-ee27-4d85-bc47-1d08537fe868", "format": "json"}]: dispatch
Oct 01 17:08:36 compute-0 ceph-mon[74273]: osdmap e164: 3 total, 3 up, 3 in
Oct 01 17:08:37 compute-0 ceph-mon[74273]: pgmap v1187: 305 pgs: 305 active+clean; 70 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 42 KiB/s wr, 2 op/s
Oct 01 17:08:38 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 70 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 81 KiB/s wr, 4 op/s
Oct 01 17:08:39 compute-0 ceph-mon[74273]: pgmap v1188: 305 pgs: 305 active+clean; 70 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 81 KiB/s wr, 4 op/s
Oct 01 17:08:40 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 70 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 309 B/s rd, 65 KiB/s wr, 3 op/s
Oct 01 17:08:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 8814b9a0-ee27-4d85-bc47-1d08537fe868) -- by 0 seconds
Oct 01 17:08:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:08:41 compute-0 ceph-mon[74273]: pgmap v1189: 305 pgs: 305 active+clean; 70 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 309 B/s rd, 65 KiB/s wr, 3 op/s
Oct 01 17:08:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/8814b9a0-ee27-4d85-bc47-1d08537fe868/.meta.tmp'
Oct 01 17:08:41 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8814b9a0-ee27-4d85-bc47-1d08537fe868/.meta.tmp' to config b'/volumes/_nogroup/8814b9a0-ee27-4d85-bc47-1d08537fe868/.meta'
Oct 01 17:08:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:08:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:08:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:08:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:08:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:08:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:08:42 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8814b9a0-ee27-4d85-bc47-1d08537fe868, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:08:42 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 70 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 65 KiB/s wr, 3 op/s
Oct 01 17:08:42 compute-0 ceph-mon[74273]: pgmap v1190: 305 pgs: 305 active+clean; 70 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 65 KiB/s wr, 3 op/s
Oct 01 17:08:43 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f1653434-fb19-4bc0-85ed-a5473c6d290d", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:08:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:f1653434-fb19-4bc0-85ed-a5473c6d290d, vol_name:cephfs) < ""
Oct 01 17:08:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 17:08:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2029114404' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:08:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 17:08:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2029114404' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:08:44 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f1653434-fb19-4bc0-85ed-a5473c6d290d", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:08:44 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 71 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 48 KiB/s wr, 3 op/s
Oct 01 17:08:45 compute-0 podman[277851]: 2025-10-01 17:08:45.827314219 +0000 UTC m=+0.137578369 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 01 17:08:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/8bab47f1-016a-4c5f-8b58-46e57c02ad64/.snap/f40cff96-8848-426c-90c1-e99c5be0398c/a3f4a43a-ff4d-4720-991f-35deb0990245' to b'/volumes/_nogroup/8814b9a0-ee27-4d85-bc47-1d08537fe868/636cf65b-e4ac-4299-bc82-8f8b9c2d6caa'
Oct 01 17:08:45 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2029114404' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:08:45 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2029114404' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:08:45 compute-0 ceph-mon[74273]: pgmap v1191: 305 pgs: 305 active+clean; 71 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 48 KiB/s wr, 3 op/s
Oct 01 17:08:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:08:46 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 71 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 299 B/s rd, 47 KiB/s wr, 3 op/s
Oct 01 17:08:47 compute-0 ceph-mon[74273]: pgmap v1192: 305 pgs: 305 active+clean; 71 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 299 B/s rd, 47 KiB/s wr, 3 op/s
Oct 01 17:08:48 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 59 KiB/s wr, 3 op/s
Oct 01 17:08:49 compute-0 ceph-mon[74273]: pgmap v1193: 305 pgs: 305 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 59 KiB/s wr, 3 op/s
Oct 01 17:08:50 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 33 KiB/s wr, 2 op/s
Oct 01 17:08:50 compute-0 podman[277878]: 2025-10-01 17:08:50.759871151 +0000 UTC m=+0.070691631 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 17:08:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f1653434-fb19-4bc0-85ed-a5473c6d290d/.meta.tmp'
Oct 01 17:08:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f1653434-fb19-4bc0-85ed-a5473c6d290d/.meta.tmp' to config b'/volumes/_nogroup/f1653434-fb19-4bc0-85ed-a5473c6d290d/.meta'
Oct 01 17:08:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:f1653434-fb19-4bc0-85ed-a5473c6d290d, vol_name:cephfs) < ""
Oct 01 17:08:50 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f1653434-fb19-4bc0-85ed-a5473c6d290d", "format": "json"}]: dispatch
Oct 01 17:08:50 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f1653434-fb19-4bc0-85ed-a5473c6d290d, vol_name:cephfs) < ""
Oct 01 17:08:51 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/8814b9a0-ee27-4d85-bc47-1d08537fe868/.meta.tmp'
Oct 01 17:08:51 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8814b9a0-ee27-4d85-bc47-1d08537fe868/.meta.tmp' to config b'/volumes/_nogroup/8814b9a0-ee27-4d85-bc47-1d08537fe868/.meta'
Oct 01 17:08:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:08:51 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.clone_index] untracking c3330fd6-a9c0-4c92-bcee-9e746476d37e
Oct 01 17:08:52 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 33 KiB/s wr, 2 op/s
Oct 01 17:08:52 compute-0 ceph-mon[74273]: pgmap v1194: 305 pgs: 305 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 33 KiB/s wr, 2 op/s
Oct 01 17:08:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8bab47f1-016a-4c5f-8b58-46e57c02ad64/.meta.tmp'
Oct 01 17:08:52 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8bab47f1-016a-4c5f-8b58-46e57c02ad64/.meta.tmp' to config b'/volumes/_nogroup/8bab47f1-016a-4c5f-8b58-46e57c02ad64/.meta'
Oct 01 17:08:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/8814b9a0-ee27-4d85-bc47-1d08537fe868/.meta.tmp'
Oct 01 17:08:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8814b9a0-ee27-4d85-bc47-1d08537fe868/.meta.tmp' to config b'/volumes/_nogroup/8814b9a0-ee27-4d85-bc47-1d08537fe868/.meta'
Oct 01 17:08:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 8814b9a0-ee27-4d85-bc47-1d08537fe868)
Oct 01 17:08:53 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f1653434-fb19-4bc0-85ed-a5473c6d290d, vol_name:cephfs) < ""
Oct 01 17:08:53 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:08:53 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:08:53 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f1653434-fb19-4bc0-85ed-a5473c6d290d", "format": "json"}]: dispatch
Oct 01 17:08:53 compute-0 ceph-mon[74273]: pgmap v1195: 305 pgs: 305 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 33 KiB/s wr, 2 op/s
Oct 01 17:08:54 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 57 KiB/s wr, 3 op/s
Oct 01 17:08:55 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:08:55 compute-0 ceph-mon[74273]: pgmap v1196: 305 pgs: 305 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 57 KiB/s wr, 3 op/s
Oct 01 17:08:56 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8814b9a0-ee27-4d85-bc47-1d08537fe868", "format": "json"}]: dispatch
Oct 01 17:08:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8814b9a0-ee27-4d85-bc47-1d08537fe868, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:08:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:08:56 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 43 KiB/s wr, 2 op/s
Oct 01 17:08:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8814b9a0-ee27-4d85-bc47-1d08537fe868, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:08:56 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8814b9a0-ee27-4d85-bc47-1d08537fe868", "format": "json"}]: dispatch
Oct 01 17:08:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8814b9a0-ee27-4d85-bc47-1d08537fe868, vol_name:cephfs) < ""
Oct 01 17:08:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8814b9a0-ee27-4d85-bc47-1d08537fe868, vol_name:cephfs) < ""
Oct 01 17:08:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:08:56 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:08:56 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "f1653434-fb19-4bc0-85ed-a5473c6d290d", "snap_name": "b0203c1e-2e73-4e30-956c-1b8e7eb6fbb1", "format": "json"}]: dispatch
Oct 01 17:08:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b0203c1e-2e73-4e30-956c-1b8e7eb6fbb1, sub_name:f1653434-fb19-4bc0-85ed-a5473c6d290d, vol_name:cephfs) < ""
Oct 01 17:08:56 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b0203c1e-2e73-4e30-956c-1b8e7eb6fbb1, sub_name:f1653434-fb19-4bc0-85ed-a5473c6d290d, vol_name:cephfs) < ""
Oct 01 17:08:57 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8814b9a0-ee27-4d85-bc47-1d08537fe868", "format": "json"}]: dispatch
Oct 01 17:08:57 compute-0 ceph-mon[74273]: pgmap v1197: 305 pgs: 305 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 43 KiB/s wr, 2 op/s
Oct 01 17:08:57 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8814b9a0-ee27-4d85-bc47-1d08537fe868", "format": "json"}]: dispatch
Oct 01 17:08:57 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:08:57 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "60e01a62-d5ac-404d-b890-d73b6e36f5cd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:08:57 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:60e01a62-d5ac-404d-b890-d73b6e36f5cd, vol_name:cephfs) < ""
Oct 01 17:08:58 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 75 KiB/s wr, 4 op/s
Oct 01 17:08:58 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "f1653434-fb19-4bc0-85ed-a5473c6d290d", "snap_name": "b0203c1e-2e73-4e30-956c-1b8e7eb6fbb1", "format": "json"}]: dispatch
Oct 01 17:08:58 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "60e01a62-d5ac-404d-b890-d73b6e36f5cd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:08:58 compute-0 ceph-mon[74273]: pgmap v1198: 305 pgs: 305 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 75 KiB/s wr, 4 op/s
Oct 01 17:08:59 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/60e01a62-d5ac-404d-b890-d73b6e36f5cd/.meta.tmp'
Oct 01 17:08:59 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/60e01a62-d5ac-404d-b890-d73b6e36f5cd/.meta.tmp' to config b'/volumes/_nogroup/60e01a62-d5ac-404d-b890-d73b6e36f5cd/.meta'
Oct 01 17:08:59 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:60e01a62-d5ac-404d-b890-d73b6e36f5cd, vol_name:cephfs) < ""
Oct 01 17:08:59 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "60e01a62-d5ac-404d-b890-d73b6e36f5cd", "format": "json"}]: dispatch
Oct 01 17:08:59 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:60e01a62-d5ac-404d-b890-d73b6e36f5cd, vol_name:cephfs) < ""
Oct 01 17:08:59 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:60e01a62-d5ac-404d-b890-d73b6e36f5cd, vol_name:cephfs) < ""
Oct 01 17:08:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:08:59 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:08:59 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f1653434-fb19-4bc0-85ed-a5473c6d290d", "snap_name": "b0203c1e-2e73-4e30-956c-1b8e7eb6fbb1_a0cf2fa8-f1f6-4269-a744-da5e4a160b4f", "force": true, "format": "json"}]: dispatch
Oct 01 17:08:59 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b0203c1e-2e73-4e30-956c-1b8e7eb6fbb1_a0cf2fa8-f1f6-4269-a744-da5e4a160b4f, sub_name:f1653434-fb19-4bc0-85ed-a5473c6d290d, vol_name:cephfs) < ""
Oct 01 17:09:00 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 56 KiB/s wr, 3 op/s
Oct 01 17:09:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f1653434-fb19-4bc0-85ed-a5473c6d290d/.meta.tmp'
Oct 01 17:09:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f1653434-fb19-4bc0-85ed-a5473c6d290d/.meta.tmp' to config b'/volumes/_nogroup/f1653434-fb19-4bc0-85ed-a5473c6d290d/.meta'
Oct 01 17:09:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b0203c1e-2e73-4e30-956c-1b8e7eb6fbb1_a0cf2fa8-f1f6-4269-a744-da5e4a160b4f, sub_name:f1653434-fb19-4bc0-85ed-a5473c6d290d, vol_name:cephfs) < ""
Oct 01 17:09:00 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f1653434-fb19-4bc0-85ed-a5473c6d290d", "snap_name": "b0203c1e-2e73-4e30-956c-1b8e7eb6fbb1", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b0203c1e-2e73-4e30-956c-1b8e7eb6fbb1, sub_name:f1653434-fb19-4bc0-85ed-a5473c6d290d, vol_name:cephfs) < ""
Oct 01 17:09:00 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "60e01a62-d5ac-404d-b890-d73b6e36f5cd", "format": "json"}]: dispatch
Oct 01 17:09:00 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:09:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:09:01 compute-0 anacron[4037]: Job `cron.monthly' started
Oct 01 17:09:01 compute-0 anacron[4037]: Job `cron.monthly' terminated
Oct 01 17:09:01 compute-0 anacron[4037]: Normal exit (3 jobs run)
Oct 01 17:09:02 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f1653434-fb19-4bc0-85ed-a5473c6d290d", "snap_name": "b0203c1e-2e73-4e30-956c-1b8e7eb6fbb1_a0cf2fa8-f1f6-4269-a744-da5e4a160b4f", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:02 compute-0 ceph-mon[74273]: pgmap v1199: 305 pgs: 305 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 56 KiB/s wr, 3 op/s
Oct 01 17:09:02 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f1653434-fb19-4bc0-85ed-a5473c6d290d", "snap_name": "b0203c1e-2e73-4e30-956c-1b8e7eb6fbb1", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f1653434-fb19-4bc0-85ed-a5473c6d290d/.meta.tmp'
Oct 01 17:09:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f1653434-fb19-4bc0-85ed-a5473c6d290d/.meta.tmp' to config b'/volumes/_nogroup/f1653434-fb19-4bc0-85ed-a5473c6d290d/.meta'
Oct 01 17:09:02 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 56 KiB/s wr, 3 op/s
Oct 01 17:09:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b0203c1e-2e73-4e30-956c-1b8e7eb6fbb1, sub_name:f1653434-fb19-4bc0-85ed-a5473c6d290d, vol_name:cephfs) < ""
Oct 01 17:09:03 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "60e01a62-d5ac-404d-b890-d73b6e36f5cd", "new_size": 2147483648, "format": "json"}]: dispatch
Oct 01 17:09:03 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:60e01a62-d5ac-404d-b890-d73b6e36f5cd, vol_name:cephfs) < ""
Oct 01 17:09:03 compute-0 podman[277902]: 2025-10-01 17:09:03.741227208 +0000 UTC m=+0.062560784 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 01 17:09:03 compute-0 ceph-mon[74273]: pgmap v1200: 305 pgs: 305 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 56 KiB/s wr, 3 op/s
Oct 01 17:09:04 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:60e01a62-d5ac-404d-b890-d73b6e36f5cd, vol_name:cephfs) < ""
Oct 01 17:09:04 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 KiB/s wr, 5 op/s
Oct 01 17:09:05 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "60e01a62-d5ac-404d-b890-d73b6e36f5cd", "new_size": 2147483648, "format": "json"}]: dispatch
Oct 01 17:09:05 compute-0 ceph-mon[74273]: pgmap v1201: 305 pgs: 305 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 KiB/s wr, 5 op/s
Oct 01 17:09:05 compute-0 podman[277922]: 2025-10-01 17:09:05.764954476 +0000 UTC m=+0.077403509 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 17:09:05 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Oct 01 17:09:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Oct 01 17:09:06 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f1653434-fb19-4bc0-85ed-a5473c6d290d", "format": "json"}]: dispatch
Oct 01 17:09:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f1653434-fb19-4bc0-85ed-a5473c6d290d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:09:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f1653434-fb19-4bc0-85ed-a5473c6d290d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:09:06 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:09:06.131+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f1653434-fb19-4bc0-85ed-a5473c6d290d' of type subvolume
Oct 01 17:09:06 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f1653434-fb19-4bc0-85ed-a5473c6d290d' of type subvolume
Oct 01 17:09:06 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f1653434-fb19-4bc0-85ed-a5473c6d290d", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f1653434-fb19-4bc0-85ed-a5473c6d290d, vol_name:cephfs) < ""
Oct 01 17:09:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f1653434-fb19-4bc0-85ed-a5473c6d290d'' moved to trashcan
Oct 01 17:09:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:09:06 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f1653434-fb19-4bc0-85ed-a5473c6d290d, vol_name:cephfs) < ""
Oct 01 17:09:06 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 62 KiB/s wr, 3 op/s
Oct 01 17:09:06 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Oct 01 17:09:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:09:07 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f1653434-fb19-4bc0-85ed-a5473c6d290d", "format": "json"}]: dispatch
Oct 01 17:09:07 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f1653434-fb19-4bc0-85ed-a5473c6d290d", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:07 compute-0 ceph-mon[74273]: pgmap v1202: 305 pgs: 305 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 62 KiB/s wr, 3 op/s
Oct 01 17:09:07 compute-0 ceph-mon[74273]: osdmap e165: 3 total, 3 up, 3 in
Oct 01 17:09:07 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "60e01a62-d5ac-404d-b890-d73b6e36f5cd", "format": "json"}]: dispatch
Oct 01 17:09:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:60e01a62-d5ac-404d-b890-d73b6e36f5cd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:09:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:60e01a62-d5ac-404d-b890-d73b6e36f5cd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:09:07 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:09:07.275+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '60e01a62-d5ac-404d-b890-d73b6e36f5cd' of type subvolume
Oct 01 17:09:07 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '60e01a62-d5ac-404d-b890-d73b6e36f5cd' of type subvolume
Oct 01 17:09:07 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "60e01a62-d5ac-404d-b890-d73b6e36f5cd", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:60e01a62-d5ac-404d-b890-d73b6e36f5cd, vol_name:cephfs) < ""
Oct 01 17:09:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/60e01a62-d5ac-404d-b890-d73b6e36f5cd'' moved to trashcan
Oct 01 17:09:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:09:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:60e01a62-d5ac-404d-b890-d73b6e36f5cd, vol_name:cephfs) < ""
Oct 01 17:09:08 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 92 KiB/s wr, 4 op/s
Oct 01 17:09:08 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "60e01a62-d5ac-404d-b890-d73b6e36f5cd", "format": "json"}]: dispatch
Oct 01 17:09:08 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "60e01a62-d5ac-404d-b890-d73b6e36f5cd", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:09 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b4fd18da-d185-4d9e-9718-4a7a16198c6c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:09:09 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b4fd18da-d185-4d9e-9718-4a7a16198c6c, vol_name:cephfs) < ""
Oct 01 17:09:09 compute-0 ceph-mon[74273]: pgmap v1204: 305 pgs: 305 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 92 KiB/s wr, 4 op/s
Oct 01 17:09:10 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 92 KiB/s wr, 4 op/s
Oct 01 17:09:10 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b4fd18da-d185-4d9e-9718-4a7a16198c6c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:09:10 compute-0 ceph-mon[74273]: pgmap v1205: 305 pgs: 305 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 92 KiB/s wr, 4 op/s
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b4fd18da-d185-4d9e-9718-4a7a16198c6c/.meta.tmp'
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b4fd18da-d185-4d9e-9718-4a7a16198c6c/.meta.tmp' to config b'/volumes/_nogroup/b4fd18da-d185-4d9e-9718-4a7a16198c6c/.meta'
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b4fd18da-d185-4d9e-9718-4a7a16198c6c, vol_name:cephfs) < ""
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b4fd18da-d185-4d9e-9718-4a7a16198c6c", "format": "json"}]: dispatch
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b4fd18da-d185-4d9e-9718-4a7a16198c6c, vol_name:cephfs) < ""
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b4fd18da-d185-4d9e-9718-4a7a16198c6c, vol_name:cephfs) < ""
Oct 01 17:09:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:09:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:09:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:09:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Oct 01 17:09:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Oct 01 17:09:11 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_17:09:11
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'volumes', 'backups']
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:09:11 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:09:12 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b4fd18da-d185-4d9e-9718-4a7a16198c6c", "format": "json"}]: dispatch
Oct 01 17:09:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:09:12 compute-0 ceph-mon[74273]: osdmap e166: 3 total, 3 up, 3 in
Oct 01 17:09:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:09:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:09:12 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 70 KiB/s wr, 3 op/s
Oct 01 17:09:13 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b4fd18da-d185-4d9e-9718-4a7a16198c6c", "snap_name": "f6cc2e23-37ce-4b1d-a221-64aa3ffccc62", "format": "json"}]: dispatch
Oct 01 17:09:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f6cc2e23-37ce-4b1d-a221-64aa3ffccc62, sub_name:b4fd18da-d185-4d9e-9718-4a7a16198c6c, vol_name:cephfs) < ""
Oct 01 17:09:13 compute-0 ceph-mon[74273]: pgmap v1207: 305 pgs: 305 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 70 KiB/s wr, 3 op/s
Oct 01 17:09:13 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f6cc2e23-37ce-4b1d-a221-64aa3ffccc62, sub_name:b4fd18da-d185-4d9e-9718-4a7a16198c6c, vol_name:cephfs) < ""
Oct 01 17:09:14 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 95 KiB/s wr, 5 op/s
Oct 01 17:09:14 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b4fd18da-d185-4d9e-9718-4a7a16198c6c", "snap_name": "f6cc2e23-37ce-4b1d-a221-64aa3ffccc62", "format": "json"}]: dispatch
Oct 01 17:09:14 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7262037d-5a43-4a8e-99a6-10daf7eb5d63", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:09:14 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:7262037d-5a43-4a8e-99a6-10daf7eb5d63, vol_name:cephfs) < ""
Oct 01 17:09:15 compute-0 ceph-mon[74273]: pgmap v1208: 305 pgs: 305 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 95 KiB/s wr, 5 op/s
Oct 01 17:09:16 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 616 B/s rd, 77 KiB/s wr, 4 op/s
Oct 01 17:09:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:09:16 compute-0 podman[277943]: 2025-10-01 17:09:16.836830399 +0000 UTC m=+0.143493116 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 01 17:09:17 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7262037d-5a43-4a8e-99a6-10daf7eb5d63", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:09:17 compute-0 ceph-mon[74273]: pgmap v1209: 305 pgs: 305 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 616 B/s rd, 77 KiB/s wr, 4 op/s
Oct 01 17:09:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7262037d-5a43-4a8e-99a6-10daf7eb5d63/.meta.tmp'
Oct 01 17:09:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7262037d-5a43-4a8e-99a6-10daf7eb5d63/.meta.tmp' to config b'/volumes/_nogroup/7262037d-5a43-4a8e-99a6-10daf7eb5d63/.meta'
Oct 01 17:09:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:7262037d-5a43-4a8e-99a6-10daf7eb5d63, vol_name:cephfs) < ""
Oct 01 17:09:18 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7262037d-5a43-4a8e-99a6-10daf7eb5d63", "format": "json"}]: dispatch
Oct 01 17:09:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7262037d-5a43-4a8e-99a6-10daf7eb5d63, vol_name:cephfs) < ""
Oct 01 17:09:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7262037d-5a43-4a8e-99a6-10daf7eb5d63, vol_name:cephfs) < ""
Oct 01 17:09:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:09:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:09:18 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 73 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 64 KiB/s wr, 4 op/s
Oct 01 17:09:18 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b4fd18da-d185-4d9e-9718-4a7a16198c6c", "snap_name": "f6cc2e23-37ce-4b1d-a221-64aa3ffccc62_09e42390-3062-4d04-b349-f2f366d945b9", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f6cc2e23-37ce-4b1d-a221-64aa3ffccc62_09e42390-3062-4d04-b349-f2f366d945b9, sub_name:b4fd18da-d185-4d9e-9718-4a7a16198c6c, vol_name:cephfs) < ""
Oct 01 17:09:18 compute-0 nova_compute[259504]: 2025-10-01 17:09:18.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:09:18 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7262037d-5a43-4a8e-99a6-10daf7eb5d63", "format": "json"}]: dispatch
Oct 01 17:09:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:09:18 compute-0 ceph-mon[74273]: pgmap v1210: 305 pgs: 305 active+clean; 73 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 64 KiB/s wr, 4 op/s
Oct 01 17:09:18 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b4fd18da-d185-4d9e-9718-4a7a16198c6c", "snap_name": "f6cc2e23-37ce-4b1d-a221-64aa3ffccc62_09e42390-3062-4d04-b349-f2f366d945b9", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b4fd18da-d185-4d9e-9718-4a7a16198c6c/.meta.tmp'
Oct 01 17:09:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b4fd18da-d185-4d9e-9718-4a7a16198c6c/.meta.tmp' to config b'/volumes/_nogroup/b4fd18da-d185-4d9e-9718-4a7a16198c6c/.meta'
Oct 01 17:09:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f6cc2e23-37ce-4b1d-a221-64aa3ffccc62_09e42390-3062-4d04-b349-f2f366d945b9, sub_name:b4fd18da-d185-4d9e-9718-4a7a16198c6c, vol_name:cephfs) < ""
Oct 01 17:09:18 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b4fd18da-d185-4d9e-9718-4a7a16198c6c", "snap_name": "f6cc2e23-37ce-4b1d-a221-64aa3ffccc62", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:18 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f6cc2e23-37ce-4b1d-a221-64aa3ffccc62, sub_name:b4fd18da-d185-4d9e-9718-4a7a16198c6c, vol_name:cephfs) < ""
Oct 01 17:09:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:09:19.978 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:09:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:09:19.979 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:09:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:09:19.979 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:09:19 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b4fd18da-d185-4d9e-9718-4a7a16198c6c/.meta.tmp'
Oct 01 17:09:19 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b4fd18da-d185-4d9e-9718-4a7a16198c6c/.meta.tmp' to config b'/volumes/_nogroup/b4fd18da-d185-4d9e-9718-4a7a16198c6c/.meta'
Oct 01 17:09:20 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 73 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 64 KiB/s wr, 3 op/s
Oct 01 17:09:20 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b4fd18da-d185-4d9e-9718-4a7a16198c6c", "snap_name": "f6cc2e23-37ce-4b1d-a221-64aa3ffccc62", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005134313749430228 of space, bias 4.0, pg target 0.6161176499316273 quantized to 16 (current 16)
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:09:21 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f6cc2e23-37ce-4b1d-a221-64aa3ffccc62, sub_name:b4fd18da-d185-4d9e-9718-4a7a16198c6c, vol_name:cephfs) < ""
Oct 01 17:09:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:09:21 compute-0 ceph-mon[74273]: pgmap v1211: 305 pgs: 305 active+clean; 73 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 64 KiB/s wr, 3 op/s
Oct 01 17:09:21 compute-0 podman[277970]: 2025-10-01 17:09:21.784472917 +0000 UTC m=+0.082758663 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 01 17:09:22 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 73 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 284 B/s rd, 59 KiB/s wr, 3 op/s
Oct 01 17:09:22 compute-0 ceph-mon[74273]: pgmap v1212: 305 pgs: 305 active+clean; 73 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 284 B/s rd, 59 KiB/s wr, 3 op/s
Oct 01 17:09:22 compute-0 nova_compute[259504]: 2025-10-01 17:09:22.749 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:09:22 compute-0 nova_compute[259504]: 2025-10-01 17:09:22.787 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:09:22 compute-0 nova_compute[259504]: 2025-10-01 17:09:22.787 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:09:22 compute-0 nova_compute[259504]: 2025-10-01 17:09:22.787 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:09:22 compute-0 nova_compute[259504]: 2025-10-01 17:09:22.788 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 17:09:22 compute-0 nova_compute[259504]: 2025-10-01 17:09:22.788 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:09:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:09:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/523107951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:09:23 compute-0 nova_compute[259504]: 2025-10-01 17:09:23.379 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:09:23 compute-0 nova_compute[259504]: 2025-10-01 17:09:23.527 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 17:09:23 compute-0 nova_compute[259504]: 2025-10-01 17:09:23.528 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5051MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 17:09:23 compute-0 nova_compute[259504]: 2025-10-01 17:09:23.528 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:09:23 compute-0 nova_compute[259504]: 2025-10-01 17:09:23.529 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:09:23 compute-0 nova_compute[259504]: 2025-10-01 17:09:23.769 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 17:09:23 compute-0 nova_compute[259504]: 2025-10-01 17:09:23.770 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 17:09:23 compute-0 nova_compute[259504]: 2025-10-01 17:09:23.921 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Refreshing inventories for resource provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 01 17:09:23 compute-0 nova_compute[259504]: 2025-10-01 17:09:23.979 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Updating ProviderTree inventory for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 01 17:09:23 compute-0 nova_compute[259504]: 2025-10-01 17:09:23.980 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Updating inventory in ProviderTree for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 01 17:09:23 compute-0 nova_compute[259504]: 2025-10-01 17:09:23.994 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Refreshing aggregate associations for resource provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 01 17:09:24 compute-0 nova_compute[259504]: 2025-10-01 17:09:24.017 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Refreshing trait associations for resource provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_ABM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_ACCELERATORS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 01 17:09:24 compute-0 nova_compute[259504]: 2025-10-01 17:09:24.050 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:09:24 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b4fd18da-d185-4d9e-9718-4a7a16198c6c", "format": "json"}]: dispatch
Oct 01 17:09:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b4fd18da-d185-4d9e-9718-4a7a16198c6c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:09:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b4fd18da-d185-4d9e-9718-4a7a16198c6c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:09:24 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:09:24.110+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b4fd18da-d185-4d9e-9718-4a7a16198c6c' of type subvolume
Oct 01 17:09:24 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b4fd18da-d185-4d9e-9718-4a7a16198c6c' of type subvolume
Oct 01 17:09:24 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b4fd18da-d185-4d9e-9718-4a7a16198c6c", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b4fd18da-d185-4d9e-9718-4a7a16198c6c, vol_name:cephfs) < ""
Oct 01 17:09:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b4fd18da-d185-4d9e-9718-4a7a16198c6c'' moved to trashcan
Oct 01 17:09:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:09:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b4fd18da-d185-4d9e-9718-4a7a16198c6c, vol_name:cephfs) < ""
Oct 01 17:09:24 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 73 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 79 KiB/s wr, 4 op/s
Oct 01 17:09:24 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "7262037d-5a43-4a8e-99a6-10daf7eb5d63", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Oct 01 17:09:24 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:7262037d-5a43-4a8e-99a6-10daf7eb5d63, vol_name:cephfs) < ""
Oct 01 17:09:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:09:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3016621864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:09:24 compute-0 nova_compute[259504]: 2025-10-01 17:09:24.488 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:09:24 compute-0 nova_compute[259504]: 2025-10-01 17:09:24.495 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 17:09:24 compute-0 nova_compute[259504]: 2025-10-01 17:09:24.514 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 17:09:24 compute-0 nova_compute[259504]: 2025-10-01 17:09:24.515 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 17:09:24 compute-0 nova_compute[259504]: 2025-10-01 17:09:24.515 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:09:24 compute-0 nova_compute[259504]: 2025-10-01 17:09:24.516 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:09:24 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/523107951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:09:25 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:7262037d-5a43-4a8e-99a6-10daf7eb5d63, vol_name:cephfs) < ""
Oct 01 17:09:25 compute-0 nova_compute[259504]: 2025-10-01 17:09:25.532 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:09:25 compute-0 nova_compute[259504]: 2025-10-01 17:09:25.532 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:09:25 compute-0 nova_compute[259504]: 2025-10-01 17:09:25.532 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 17:09:25 compute-0 nova_compute[259504]: 2025-10-01 17:09:25.533 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 17:09:25 compute-0 nova_compute[259504]: 2025-10-01 17:09:25.552 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 17:09:25 compute-0 nova_compute[259504]: 2025-10-01 17:09:25.552 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:09:25 compute-0 nova_compute[259504]: 2025-10-01 17:09:25.552 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:09:25 compute-0 sudo[278035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:09:25 compute-0 sudo[278035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:09:25 compute-0 sudo[278035]: pam_unix(sudo:session): session closed for user root
Oct 01 17:09:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Oct 01 17:09:25 compute-0 nova_compute[259504]: 2025-10-01 17:09:25.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:09:25 compute-0 nova_compute[259504]: 2025-10-01 17:09:25.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:09:25 compute-0 nova_compute[259504]: 2025-10-01 17:09:25.750 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 01 17:09:25 compute-0 sudo[278060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:09:25 compute-0 sudo[278060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:09:25 compute-0 sudo[278060]: pam_unix(sudo:session): session closed for user root
Oct 01 17:09:25 compute-0 sudo[278085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:09:25 compute-0 sudo[278085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:09:25 compute-0 sudo[278085]: pam_unix(sudo:session): session closed for user root
Oct 01 17:09:25 compute-0 sudo[278110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 17:09:25 compute-0 sudo[278110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:09:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Oct 01 17:09:26 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Oct 01 17:09:26 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 73 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 75 KiB/s wr, 3 op/s
Oct 01 17:09:26 compute-0 sudo[278110]: pam_unix(sudo:session): session closed for user root
Oct 01 17:09:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:09:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:09:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 17:09:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:09:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 17:09:26 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b4fd18da-d185-4d9e-9718-4a7a16198c6c", "format": "json"}]: dispatch
Oct 01 17:09:26 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b4fd18da-d185-4d9e-9718-4a7a16198c6c", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:26 compute-0 ceph-mon[74273]: pgmap v1213: 305 pgs: 305 active+clean; 73 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 79 KiB/s wr, 4 op/s
Oct 01 17:09:26 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "7262037d-5a43-4a8e-99a6-10daf7eb5d63", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Oct 01 17:09:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3016621864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:09:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:09:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:09:26 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 204bd683-6338-45d8-bbe6-c7989f1fc04a does not exist
Oct 01 17:09:26 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 756a80d8-da74-4528-8e3f-375b57e06c72 does not exist
Oct 01 17:09:26 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev fbaf5141-c53b-4625-ba57-a572515ebb9f does not exist
Oct 01 17:09:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 17:09:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:09:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 17:09:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:09:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:09:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:09:26 compute-0 nova_compute[259504]: 2025-10-01 17:09:26.769 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:09:26 compute-0 nova_compute[259504]: 2025-10-01 17:09:26.769 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 17:09:26 compute-0 sudo[278164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:09:26 compute-0 sudo[278164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:09:26 compute-0 sudo[278164]: pam_unix(sudo:session): session closed for user root
Oct 01 17:09:26 compute-0 sudo[278189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:09:26 compute-0 sudo[278189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:09:26 compute-0 sudo[278189]: pam_unix(sudo:session): session closed for user root
Oct 01 17:09:26 compute-0 sudo[278214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:09:26 compute-0 sudo[278214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:09:26 compute-0 sudo[278214]: pam_unix(sudo:session): session closed for user root
Oct 01 17:09:26 compute-0 sudo[278239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 17:09:26 compute-0 sudo[278239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:09:27 compute-0 podman[278301]: 2025-10-01 17:09:27.267014904 +0000 UTC m=+0.041244213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:09:27 compute-0 podman[278301]: 2025-10-01 17:09:27.50060172 +0000 UTC m=+0.274830929 container create b866f05e2581922dd8a7cb122a09cdf97849bd60536d4f79b391d7c95b04ca77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 17:09:27 compute-0 ceph-mon[74273]: osdmap e167: 3 total, 3 up, 3 in
Oct 01 17:09:27 compute-0 ceph-mon[74273]: pgmap v1215: 305 pgs: 305 active+clean; 73 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 75 KiB/s wr, 3 op/s
Oct 01 17:09:27 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:09:27 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:09:27 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:09:27 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:09:27 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:09:27 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:09:27 compute-0 systemd[1]: Started libpod-conmon-b866f05e2581922dd8a7cb122a09cdf97849bd60536d4f79b391d7c95b04ca77.scope.
Oct 01 17:09:27 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7262037d-5a43-4a8e-99a6-10daf7eb5d63", "format": "json"}]: dispatch
Oct 01 17:09:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7262037d-5a43-4a8e-99a6-10daf7eb5d63, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:09:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7262037d-5a43-4a8e-99a6-10daf7eb5d63, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:09:27 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7262037d-5a43-4a8e-99a6-10daf7eb5d63' of type subvolume
Oct 01 17:09:27 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:09:27.657+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7262037d-5a43-4a8e-99a6-10daf7eb5d63' of type subvolume
Oct 01 17:09:27 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7262037d-5a43-4a8e-99a6-10daf7eb5d63", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7262037d-5a43-4a8e-99a6-10daf7eb5d63, vol_name:cephfs) < ""
Oct 01 17:09:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7262037d-5a43-4a8e-99a6-10daf7eb5d63'' moved to trashcan
Oct 01 17:09:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:09:27 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7262037d-5a43-4a8e-99a6-10daf7eb5d63, vol_name:cephfs) < ""
Oct 01 17:09:27 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:09:27 compute-0 podman[278301]: 2025-10-01 17:09:27.986884054 +0000 UTC m=+0.761113303 container init b866f05e2581922dd8a7cb122a09cdf97849bd60536d4f79b391d7c95b04ca77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct 01 17:09:28 compute-0 podman[278301]: 2025-10-01 17:09:28.003142222 +0000 UTC m=+0.777371451 container start b866f05e2581922dd8a7cb122a09cdf97849bd60536d4f79b391d7c95b04ca77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 17:09:28 compute-0 sleepy_cartwright[278317]: 167 167
Oct 01 17:09:28 compute-0 systemd[1]: libpod-b866f05e2581922dd8a7cb122a09cdf97849bd60536d4f79b391d7c95b04ca77.scope: Deactivated successfully.
Oct 01 17:09:28 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 73 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 54 KiB/s wr, 2 op/s
Oct 01 17:09:28 compute-0 podman[278301]: 2025-10-01 17:09:28.26420469 +0000 UTC m=+1.038433999 container attach b866f05e2581922dd8a7cb122a09cdf97849bd60536d4f79b391d7c95b04ca77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:09:28 compute-0 podman[278301]: 2025-10-01 17:09:28.26596944 +0000 UTC m=+1.040198709 container died b866f05e2581922dd8a7cb122a09cdf97849bd60536d4f79b391d7c95b04ca77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct 01 17:09:28 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7262037d-5a43-4a8e-99a6-10daf7eb5d63", "format": "json"}]: dispatch
Oct 01 17:09:28 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7262037d-5a43-4a8e-99a6-10daf7eb5d63", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:28 compute-0 ceph-mon[74273]: pgmap v1216: 305 pgs: 305 active+clean; 73 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 54 KiB/s wr, 2 op/s
Oct 01 17:09:28 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:09:28.693 162304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:71:db', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:60:3f:78:bd:29'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 17:09:28 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:09:28.694 162304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 17:09:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d708d39e360bd5b4024b5da2d7e7ff756bbff2ef41ba6086c6f060227b36557c-merged.mount: Deactivated successfully.
Oct 01 17:09:29 compute-0 podman[278301]: 2025-10-01 17:09:29.442107417 +0000 UTC m=+2.216336666 container remove b866f05e2581922dd8a7cb122a09cdf97849bd60536d4f79b391d7c95b04ca77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 17:09:29 compute-0 systemd[1]: libpod-conmon-b866f05e2581922dd8a7cb122a09cdf97849bd60536d4f79b391d7c95b04ca77.scope: Deactivated successfully.
Oct 01 17:09:29 compute-0 podman[278343]: 2025-10-01 17:09:29.630316548 +0000 UTC m=+0.031771557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:09:29 compute-0 nova_compute[259504]: 2025-10-01 17:09:29.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:09:29 compute-0 podman[278343]: 2025-10-01 17:09:29.816169979 +0000 UTC m=+0.217624908 container create c7ce3ca07e57d27ac56e514c7840ba183b12816e0072663bf995548780879191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_black, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 17:09:29 compute-0 systemd[1]: Started libpod-conmon-c7ce3ca07e57d27ac56e514c7840ba183b12816e0072663bf995548780879191.scope.
Oct 01 17:09:29 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c6c80b2741e0e1e5f49703c100a90865e22bfff996743dc78e929944ebfd66f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c6c80b2741e0e1e5f49703c100a90865e22bfff996743dc78e929944ebfd66f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c6c80b2741e0e1e5f49703c100a90865e22bfff996743dc78e929944ebfd66f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c6c80b2741e0e1e5f49703c100a90865e22bfff996743dc78e929944ebfd66f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c6c80b2741e0e1e5f49703c100a90865e22bfff996743dc78e929944ebfd66f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 17:09:30 compute-0 podman[278343]: 2025-10-01 17:09:30.154514347 +0000 UTC m=+0.555969316 container init c7ce3ca07e57d27ac56e514c7840ba183b12816e0072663bf995548780879191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_black, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 01 17:09:30 compute-0 podman[278343]: 2025-10-01 17:09:30.160873281 +0000 UTC m=+0.562328210 container start c7ce3ca07e57d27ac56e514c7840ba183b12816e0072663bf995548780879191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_black, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 01 17:09:30 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 74 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 89 KiB/s wr, 3 op/s
Oct 01 17:09:30 compute-0 podman[278343]: 2025-10-01 17:09:30.385265089 +0000 UTC m=+0.786720038 container attach c7ce3ca07e57d27ac56e514c7840ba183b12816e0072663bf995548780879191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_black, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 17:09:30 compute-0 ceph-mon[74273]: pgmap v1217: 305 pgs: 305 active+clean; 74 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 89 KiB/s wr, 3 op/s
Oct 01 17:09:31 compute-0 cranky_black[278360]: --> passed data devices: 0 physical, 3 LVM
Oct 01 17:09:31 compute-0 cranky_black[278360]: --> relative data size: 1.0
Oct 01 17:09:31 compute-0 cranky_black[278360]: --> All data devices are unavailable
Oct 01 17:09:31 compute-0 systemd[1]: libpod-c7ce3ca07e57d27ac56e514c7840ba183b12816e0072663bf995548780879191.scope: Deactivated successfully.
Oct 01 17:09:31 compute-0 podman[278343]: 2025-10-01 17:09:31.203785174 +0000 UTC m=+1.605240143 container died c7ce3ca07e57d27ac56e514c7840ba183b12816e0072663bf995548780879191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_black, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 17:09:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c6c80b2741e0e1e5f49703c100a90865e22bfff996743dc78e929944ebfd66f-merged.mount: Deactivated successfully.
Oct 01 17:09:31 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bdd25bd8-509d-48a7-9a62-5485c6f3d21a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:09:31 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bdd25bd8-509d-48a7-9a62-5485c6f3d21a, vol_name:cephfs) < ""
Oct 01 17:09:31 compute-0 podman[278343]: 2025-10-01 17:09:31.625336095 +0000 UTC m=+2.026791024 container remove c7ce3ca07e57d27ac56e514c7840ba183b12816e0072663bf995548780879191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_black, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 17:09:31 compute-0 sudo[278239]: pam_unix(sudo:session): session closed for user root
Oct 01 17:09:31 compute-0 systemd[1]: libpod-conmon-c7ce3ca07e57d27ac56e514c7840ba183b12816e0072663bf995548780879191.scope: Deactivated successfully.
Oct 01 17:09:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:09:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Oct 01 17:09:31 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bdd25bd8-509d-48a7-9a62-5485c6f3d21a/.meta.tmp'
Oct 01 17:09:31 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bdd25bd8-509d-48a7-9a62-5485c6f3d21a/.meta.tmp' to config b'/volumes/_nogroup/bdd25bd8-509d-48a7-9a62-5485c6f3d21a/.meta'
Oct 01 17:09:31 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bdd25bd8-509d-48a7-9a62-5485c6f3d21a, vol_name:cephfs) < ""
Oct 01 17:09:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Oct 01 17:09:31 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bdd25bd8-509d-48a7-9a62-5485c6f3d21a", "format": "json"}]: dispatch
Oct 01 17:09:31 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bdd25bd8-509d-48a7-9a62-5485c6f3d21a, vol_name:cephfs) < ""
Oct 01 17:09:31 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bdd25bd8-509d-48a7-9a62-5485c6f3d21a, vol_name:cephfs) < ""
Oct 01 17:09:31 compute-0 sudo[278402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:09:31 compute-0 sudo[278402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:09:31 compute-0 sudo[278402]: pam_unix(sudo:session): session closed for user root
Oct 01 17:09:31 compute-0 sudo[278427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:09:31 compute-0 sudo[278427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:09:31 compute-0 sudo[278427]: pam_unix(sudo:session): session closed for user root
Oct 01 17:09:31 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Oct 01 17:09:31 compute-0 sudo[278452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:09:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:09:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:09:31 compute-0 sudo[278452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:09:31 compute-0 sudo[278452]: pam_unix(sudo:session): session closed for user root
Oct 01 17:09:31 compute-0 sudo[278477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 17:09:31 compute-0 sudo[278477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:09:32 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 74 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 72 KiB/s wr, 2 op/s
Oct 01 17:09:32 compute-0 podman[278544]: 2025-10-01 17:09:32.168919134 +0000 UTC m=+0.020230160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:09:32 compute-0 podman[278544]: 2025-10-01 17:09:32.303484598 +0000 UTC m=+0.154795634 container create e06be1cbafe7d707a88c87fce98dc077f84349cebcbb95003c358422447d2c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:09:32 compute-0 systemd[1]: Started libpod-conmon-e06be1cbafe7d707a88c87fce98dc077f84349cebcbb95003c358422447d2c4b.scope.
Oct 01 17:09:32 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:09:32 compute-0 podman[278544]: 2025-10-01 17:09:32.60360743 +0000 UTC m=+0.454918526 container init e06be1cbafe7d707a88c87fce98dc077f84349cebcbb95003c358422447d2c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 17:09:32 compute-0 podman[278544]: 2025-10-01 17:09:32.619628222 +0000 UTC m=+0.470939268 container start e06be1cbafe7d707a88c87fce98dc077f84349cebcbb95003c358422447d2c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:09:32 compute-0 vibrant_keldysh[278559]: 167 167
Oct 01 17:09:32 compute-0 systemd[1]: libpod-e06be1cbafe7d707a88c87fce98dc077f84349cebcbb95003c358422447d2c4b.scope: Deactivated successfully.
Oct 01 17:09:32 compute-0 podman[278544]: 2025-10-01 17:09:32.692792401 +0000 UTC m=+0.544103397 container attach e06be1cbafe7d707a88c87fce98dc077f84349cebcbb95003c358422447d2c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:09:32 compute-0 podman[278544]: 2025-10-01 17:09:32.693492678 +0000 UTC m=+0.544803664 container died e06be1cbafe7d707a88c87fce98dc077f84349cebcbb95003c358422447d2c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_keldysh, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:09:32 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:09:32.696 162304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2971fc2-5b75-459a-98a0-6e626d0d4d99, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 17:09:32 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bdd25bd8-509d-48a7-9a62-5485c6f3d21a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:09:32 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bdd25bd8-509d-48a7-9a62-5485c6f3d21a", "format": "json"}]: dispatch
Oct 01 17:09:32 compute-0 ceph-mon[74273]: osdmap e168: 3 total, 3 up, 3 in
Oct 01 17:09:32 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:09:32 compute-0 ceph-mon[74273]: pgmap v1219: 305 pgs: 305 active+clean; 74 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 72 KiB/s wr, 2 op/s
Oct 01 17:09:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5ed48870edf7c5ea45d0cd58c2fd9d963ba75e28db03f9e88aa82803618f410-merged.mount: Deactivated successfully.
Oct 01 17:09:33 compute-0 podman[278544]: 2025-10-01 17:09:33.087004437 +0000 UTC m=+0.938315443 container remove e06be1cbafe7d707a88c87fce98dc077f84349cebcbb95003c358422447d2c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_keldysh, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 17:09:33 compute-0 systemd[1]: libpod-conmon-e06be1cbafe7d707a88c87fce98dc077f84349cebcbb95003c358422447d2c4b.scope: Deactivated successfully.
Oct 01 17:09:33 compute-0 podman[278584]: 2025-10-01 17:09:33.287851151 +0000 UTC m=+0.074305777 container create dff300cc3090f1ae8aeb6beb9ad77f769696e1ef12d4e328f17e2c88104da455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_gates, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 17:09:33 compute-0 podman[278584]: 2025-10-01 17:09:33.241220731 +0000 UTC m=+0.027675437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:09:33 compute-0 systemd[1]: Started libpod-conmon-dff300cc3090f1ae8aeb6beb9ad77f769696e1ef12d4e328f17e2c88104da455.scope.
Oct 01 17:09:33 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:09:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deda3ed8b3701e8df14ba837c7723adf72ae393ddb1497cd00d5e057be62cdce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:09:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deda3ed8b3701e8df14ba837c7723adf72ae393ddb1497cd00d5e057be62cdce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:09:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deda3ed8b3701e8df14ba837c7723adf72ae393ddb1497cd00d5e057be62cdce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:09:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deda3ed8b3701e8df14ba837c7723adf72ae393ddb1497cd00d5e057be62cdce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:09:33 compute-0 podman[278584]: 2025-10-01 17:09:33.561412151 +0000 UTC m=+0.347866857 container init dff300cc3090f1ae8aeb6beb9ad77f769696e1ef12d4e328f17e2c88104da455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_gates, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:09:33 compute-0 podman[278584]: 2025-10-01 17:09:33.573331564 +0000 UTC m=+0.359786180 container start dff300cc3090f1ae8aeb6beb9ad77f769696e1ef12d4e328f17e2c88104da455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_gates, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 17:09:33 compute-0 podman[278584]: 2025-10-01 17:09:33.595953186 +0000 UTC m=+0.382407892 container attach dff300cc3090f1ae8aeb6beb9ad77f769696e1ef12d4e328f17e2c88104da455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 17:09:34 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 74 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 636 B/s rd, 99 KiB/s wr, 5 op/s
Oct 01 17:09:34 compute-0 gracious_gates[278601]: {
Oct 01 17:09:34 compute-0 gracious_gates[278601]:     "0": [
Oct 01 17:09:34 compute-0 gracious_gates[278601]:         {
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "devices": [
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "/dev/loop3"
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             ],
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "lv_name": "ceph_lv0",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "lv_size": "21470642176",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "name": "ceph_lv0",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "tags": {
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.cluster_name": "ceph",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.crush_device_class": "",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.encrypted": "0",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.osd_id": "0",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.type": "block",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.vdo": "0"
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             },
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "type": "block",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "vg_name": "ceph_vg0"
Oct 01 17:09:34 compute-0 gracious_gates[278601]:         }
Oct 01 17:09:34 compute-0 gracious_gates[278601]:     ],
Oct 01 17:09:34 compute-0 gracious_gates[278601]:     "1": [
Oct 01 17:09:34 compute-0 gracious_gates[278601]:         {
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "devices": [
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "/dev/loop4"
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             ],
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "lv_name": "ceph_lv1",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "lv_size": "21470642176",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "name": "ceph_lv1",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "tags": {
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.cluster_name": "ceph",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.crush_device_class": "",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.encrypted": "0",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.osd_id": "1",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.type": "block",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.vdo": "0"
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             },
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "type": "block",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "vg_name": "ceph_vg1"
Oct 01 17:09:34 compute-0 gracious_gates[278601]:         }
Oct 01 17:09:34 compute-0 gracious_gates[278601]:     ],
Oct 01 17:09:34 compute-0 gracious_gates[278601]:     "2": [
Oct 01 17:09:34 compute-0 gracious_gates[278601]:         {
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "devices": [
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "/dev/loop5"
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             ],
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "lv_name": "ceph_lv2",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "lv_size": "21470642176",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "name": "ceph_lv2",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "tags": {
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.cluster_name": "ceph",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.crush_device_class": "",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.encrypted": "0",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.osd_id": "2",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.type": "block",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:                 "ceph.vdo": "0"
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             },
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "type": "block",
Oct 01 17:09:34 compute-0 gracious_gates[278601]:             "vg_name": "ceph_vg2"
Oct 01 17:09:34 compute-0 gracious_gates[278601]:         }
Oct 01 17:09:34 compute-0 gracious_gates[278601]:     ]
Oct 01 17:09:34 compute-0 gracious_gates[278601]: }
Oct 01 17:09:34 compute-0 systemd[1]: libpod-dff300cc3090f1ae8aeb6beb9ad77f769696e1ef12d4e328f17e2c88104da455.scope: Deactivated successfully.
Oct 01 17:09:34 compute-0 podman[278584]: 2025-10-01 17:09:34.375104137 +0000 UTC m=+1.161558793 container died dff300cc3090f1ae8aeb6beb9ad77f769696e1ef12d4e328f17e2c88104da455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_gates, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:09:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-deda3ed8b3701e8df14ba837c7723adf72ae393ddb1497cd00d5e057be62cdce-merged.mount: Deactivated successfully.
Oct 01 17:09:35 compute-0 podman[278584]: 2025-10-01 17:09:35.088023202 +0000 UTC m=+1.874477848 container remove dff300cc3090f1ae8aeb6beb9ad77f769696e1ef12d4e328f17e2c88104da455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 17:09:35 compute-0 systemd[1]: libpod-conmon-dff300cc3090f1ae8aeb6beb9ad77f769696e1ef12d4e328f17e2c88104da455.scope: Deactivated successfully.
Oct 01 17:09:35 compute-0 sudo[278477]: pam_unix(sudo:session): session closed for user root
Oct 01 17:09:35 compute-0 sudo[278636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:09:35 compute-0 sudo[278636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:09:35 compute-0 sudo[278636]: pam_unix(sudo:session): session closed for user root
Oct 01 17:09:35 compute-0 sudo[278667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:09:35 compute-0 sudo[278667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:09:35 compute-0 podman[278611]: 2025-10-01 17:09:35.258349865 +0000 UTC m=+0.838986571 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 01 17:09:35 compute-0 sudo[278667]: pam_unix(sudo:session): session closed for user root
Oct 01 17:09:35 compute-0 ceph-mon[74273]: pgmap v1220: 305 pgs: 305 active+clean; 74 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 636 B/s rd, 99 KiB/s wr, 5 op/s
Oct 01 17:09:35 compute-0 sudo[278695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:09:35 compute-0 sudo[278695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:09:35 compute-0 sudo[278695]: pam_unix(sudo:session): session closed for user root
Oct 01 17:09:35 compute-0 sudo[278721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 17:09:35 compute-0 sudo[278721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:09:35 compute-0 podman[278787]: 2025-10-01 17:09:35.715804902 +0000 UTC m=+0.040699756 container create e31b816d5caf36c8281de4586a1c6279e107747f20a4ec761065ca20a3476dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hertz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 17:09:35 compute-0 systemd[1]: Started libpod-conmon-e31b816d5caf36c8281de4586a1c6279e107747f20a4ec761065ca20a3476dac.scope.
Oct 01 17:09:35 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:09:35 compute-0 podman[278787]: 2025-10-01 17:09:35.6975786 +0000 UTC m=+0.022473474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:09:35 compute-0 podman[278787]: 2025-10-01 17:09:35.794950018 +0000 UTC m=+0.119844872 container init e31b816d5caf36c8281de4586a1c6279e107747f20a4ec761065ca20a3476dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hertz, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:09:35 compute-0 podman[278787]: 2025-10-01 17:09:35.801149001 +0000 UTC m=+0.126043855 container start e31b816d5caf36c8281de4586a1c6279e107747f20a4ec761065ca20a3476dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:09:35 compute-0 hopeful_hertz[278804]: 167 167
Oct 01 17:09:35 compute-0 systemd[1]: libpod-e31b816d5caf36c8281de4586a1c6279e107747f20a4ec761065ca20a3476dac.scope: Deactivated successfully.
Oct 01 17:09:35 compute-0 podman[278787]: 2025-10-01 17:09:35.807845077 +0000 UTC m=+0.132739931 container attach e31b816d5caf36c8281de4586a1c6279e107747f20a4ec761065ca20a3476dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hertz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 17:09:35 compute-0 podman[278787]: 2025-10-01 17:09:35.808496352 +0000 UTC m=+0.133391216 container died e31b816d5caf36c8281de4586a1c6279e107747f20a4ec761065ca20a3476dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:09:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b0f3a720033d19ac293e7ab8f8f852488ebe9518d8e4dba69e60fddce908dfd-merged.mount: Deactivated successfully.
Oct 01 17:09:35 compute-0 podman[278787]: 2025-10-01 17:09:35.854586334 +0000 UTC m=+0.179481188 container remove e31b816d5caf36c8281de4586a1c6279e107747f20a4ec761065ca20a3476dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:09:35 compute-0 systemd[1]: libpod-conmon-e31b816d5caf36c8281de4586a1c6279e107747f20a4ec761065ca20a3476dac.scope: Deactivated successfully.
Oct 01 17:09:35 compute-0 podman[278807]: 2025-10-01 17:09:35.878718609 +0000 UTC m=+0.084712986 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd)
Oct 01 17:09:36 compute-0 podman[278847]: 2025-10-01 17:09:36.005657264 +0000 UTC m=+0.037636487 container create d933efb1503d791c08855af79bbf6546814a1d56b03185cf99a0fb7f9e5a222e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:09:36 compute-0 systemd[1]: Started libpod-conmon-d933efb1503d791c08855af79bbf6546814a1d56b03185cf99a0fb7f9e5a222e.scope.
Oct 01 17:09:36 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82debb78642e967c371c1e3232fb3346144a009349ed572cb5eed34a890b8e36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82debb78642e967c371c1e3232fb3346144a009349ed572cb5eed34a890b8e36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82debb78642e967c371c1e3232fb3346144a009349ed572cb5eed34a890b8e36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82debb78642e967c371c1e3232fb3346144a009349ed572cb5eed34a890b8e36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:09:36 compute-0 podman[278847]: 2025-10-01 17:09:36.081861979 +0000 UTC m=+0.113841202 container init d933efb1503d791c08855af79bbf6546814a1d56b03185cf99a0fb7f9e5a222e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:09:36 compute-0 podman[278847]: 2025-10-01 17:09:35.989873958 +0000 UTC m=+0.021853201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:09:36 compute-0 podman[278847]: 2025-10-01 17:09:36.089041609 +0000 UTC m=+0.121020832 container start d933efb1503d791c08855af79bbf6546814a1d56b03185cf99a0fb7f9e5a222e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:09:36 compute-0 podman[278847]: 2025-10-01 17:09:36.092026572 +0000 UTC m=+0.124005825 container attach d933efb1503d791c08855af79bbf6546814a1d56b03185cf99a0fb7f9e5a222e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:09:36 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 74 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 80 KiB/s wr, 4 op/s
Oct 01 17:09:36 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0f3443b9-e9d9-4a47-839c-8c53679a0f3e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:09:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0f3443b9-e9d9-4a47-839c-8c53679a0f3e, vol_name:cephfs) < ""
Oct 01 17:09:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0f3443b9-e9d9-4a47-839c-8c53679a0f3e/.meta.tmp'
Oct 01 17:09:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0f3443b9-e9d9-4a47-839c-8c53679a0f3e/.meta.tmp' to config b'/volumes/_nogroup/0f3443b9-e9d9-4a47-839c-8c53679a0f3e/.meta'
Oct 01 17:09:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0f3443b9-e9d9-4a47-839c-8c53679a0f3e, vol_name:cephfs) < ""
Oct 01 17:09:36 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0f3443b9-e9d9-4a47-839c-8c53679a0f3e", "format": "json"}]: dispatch
Oct 01 17:09:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0f3443b9-e9d9-4a47-839c-8c53679a0f3e, vol_name:cephfs) < ""
Oct 01 17:09:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0f3443b9-e9d9-4a47-839c-8c53679a0f3e, vol_name:cephfs) < ""
Oct 01 17:09:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:09:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:09:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:09:36 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bdd25bd8-509d-48a7-9a62-5485c6f3d21a", "format": "json"}]: dispatch
Oct 01 17:09:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bdd25bd8-509d-48a7-9a62-5485c6f3d21a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:09:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bdd25bd8-509d-48a7-9a62-5485c6f3d21a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:09:36 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bdd25bd8-509d-48a7-9a62-5485c6f3d21a' of type subvolume
Oct 01 17:09:36 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:09:36.823+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bdd25bd8-509d-48a7-9a62-5485c6f3d21a' of type subvolume
Oct 01 17:09:36 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bdd25bd8-509d-48a7-9a62-5485c6f3d21a", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bdd25bd8-509d-48a7-9a62-5485c6f3d21a, vol_name:cephfs) < ""
Oct 01 17:09:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bdd25bd8-509d-48a7-9a62-5485c6f3d21a'' moved to trashcan
Oct 01 17:09:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:09:36 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bdd25bd8-509d-48a7-9a62-5485c6f3d21a, vol_name:cephfs) < ""
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]: {
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:         "osd_id": 2,
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:         "type": "bluestore"
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:     },
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:         "osd_id": 0,
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:         "type": "bluestore"
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:     },
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:         "osd_id": 1,
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:         "type": "bluestore"
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]:     }
Oct 01 17:09:36 compute-0 upbeat_goldwasser[278863]: }
Oct 01 17:09:37 compute-0 systemd[1]: libpod-d933efb1503d791c08855af79bbf6546814a1d56b03185cf99a0fb7f9e5a222e.scope: Deactivated successfully.
Oct 01 17:09:37 compute-0 podman[278847]: 2025-10-01 17:09:37.012887744 +0000 UTC m=+1.044866967 container died d933efb1503d791c08855af79bbf6546814a1d56b03185cf99a0fb7f9e5a222e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 17:09:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-82debb78642e967c371c1e3232fb3346144a009349ed572cb5eed34a890b8e36-merged.mount: Deactivated successfully.
Oct 01 17:09:37 compute-0 podman[278847]: 2025-10-01 17:09:37.084655587 +0000 UTC m=+1.116634820 container remove d933efb1503d791c08855af79bbf6546814a1d56b03185cf99a0fb7f9e5a222e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 01 17:09:37 compute-0 systemd[1]: libpod-conmon-d933efb1503d791c08855af79bbf6546814a1d56b03185cf99a0fb7f9e5a222e.scope: Deactivated successfully.
Oct 01 17:09:37 compute-0 sudo[278721]: pam_unix(sudo:session): session closed for user root
Oct 01 17:09:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:09:37 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:09:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:09:37 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:09:37 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 0f29767f-9cae-4b70-9be0-802ef69a8037 does not exist
Oct 01 17:09:37 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 4f06c03e-648a-47dc-8f13-3db5aa511f09 does not exist
Oct 01 17:09:37 compute-0 sudo[278907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:09:37 compute-0 sudo[278907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:09:37 compute-0 sudo[278907]: pam_unix(sudo:session): session closed for user root
Oct 01 17:09:37 compute-0 sudo[278932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 17:09:37 compute-0 sudo[278932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:09:37 compute-0 sudo[278932]: pam_unix(sudo:session): session closed for user root
Oct 01 17:09:37 compute-0 ceph-mon[74273]: pgmap v1221: 305 pgs: 305 active+clean; 74 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 80 KiB/s wr, 4 op/s
Oct 01 17:09:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:09:37 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:09:37 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:09:38 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 74 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 57 KiB/s wr, 3 op/s
Oct 01 17:09:38 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0f3443b9-e9d9-4a47-839c-8c53679a0f3e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:09:38 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0f3443b9-e9d9-4a47-839c-8c53679a0f3e", "format": "json"}]: dispatch
Oct 01 17:09:38 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bdd25bd8-509d-48a7-9a62-5485c6f3d21a", "format": "json"}]: dispatch
Oct 01 17:09:38 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bdd25bd8-509d-48a7-9a62-5485c6f3d21a", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:38 compute-0 nova_compute[259504]: 2025-10-01 17:09:38.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:09:38 compute-0 nova_compute[259504]: 2025-10-01 17:09:38.752 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 01 17:09:38 compute-0 nova_compute[259504]: 2025-10-01 17:09:38.779 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 01 17:09:39 compute-0 ceph-mon[74273]: pgmap v1222: 305 pgs: 305 active+clean; 74 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 57 KiB/s wr, 3 op/s
Oct 01 17:09:40 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0f3443b9-e9d9-4a47-839c-8c53679a0f3e", "snap_name": "9df0b9f8-e591-4a39-b66c-e88c6cec14cd", "format": "json"}]: dispatch
Oct 01 17:09:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9df0b9f8-e591-4a39-b66c-e88c6cec14cd, sub_name:0f3443b9-e9d9-4a47-839c-8c53679a0f3e, vol_name:cephfs) < ""
Oct 01 17:09:40 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 74 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 59 KiB/s wr, 4 op/s
Oct 01 17:09:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9df0b9f8-e591-4a39-b66c-e88c6cec14cd, sub_name:0f3443b9-e9d9-4a47-839c-8c53679a0f3e, vol_name:cephfs) < ""
Oct 01 17:09:40 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8814b9a0-ee27-4d85-bc47-1d08537fe868", "format": "json"}]: dispatch
Oct 01 17:09:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8814b9a0-ee27-4d85-bc47-1d08537fe868, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:09:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8814b9a0-ee27-4d85-bc47-1d08537fe868, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:09:40 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8814b9a0-ee27-4d85-bc47-1d08537fe868", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8814b9a0-ee27-4d85-bc47-1d08537fe868, vol_name:cephfs) < ""
Oct 01 17:09:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8814b9a0-ee27-4d85-bc47-1d08537fe868'' moved to trashcan
Oct 01 17:09:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:09:40 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8814b9a0-ee27-4d85-bc47-1d08537fe868, vol_name:cephfs) < ""
Oct 01 17:09:40 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0f3443b9-e9d9-4a47-839c-8c53679a0f3e", "snap_name": "9df0b9f8-e591-4a39-b66c-e88c6cec14cd", "format": "json"}]: dispatch
Oct 01 17:09:40 compute-0 ceph-mon[74273]: pgmap v1223: 305 pgs: 305 active+clean; 74 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 59 KiB/s wr, 4 op/s
Oct 01 17:09:40 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8814b9a0-ee27-4d85-bc47-1d08537fe868", "format": "json"}]: dispatch
Oct 01 17:09:40 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8814b9a0-ee27-4d85-bc47-1d08537fe868", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:09:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:09:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:09:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:09:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:09:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f81397faf10>)]
Oct 01 17:09:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Oct 01 17:09:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:09:42 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 74 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 296 B/s rd, 57 KiB/s wr, 3 op/s
Oct 01 17:09:42 compute-0 ceph-mon[74273]: pgmap v1224: 305 pgs: 305 active+clean; 74 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 296 B/s rd, 57 KiB/s wr, 3 op/s
Oct 01 17:09:43 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8bab47f1-016a-4c5f-8b58-46e57c02ad64", "snap_name": "f40cff96-8848-426c-90c1-e99c5be0398c_0c523dfb-ea8e-423a-89c4-13a61256651c", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f40cff96-8848-426c-90c1-e99c5be0398c_0c523dfb-ea8e-423a-89c4-13a61256651c, sub_name:8bab47f1-016a-4c5f-8b58-46e57c02ad64, vol_name:cephfs) < ""
Oct 01 17:09:43 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.pmbdpj(active, since 35m)
Oct 01 17:09:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8bab47f1-016a-4c5f-8b58-46e57c02ad64/.meta.tmp'
Oct 01 17:09:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8bab47f1-016a-4c5f-8b58-46e57c02ad64/.meta.tmp' to config b'/volumes/_nogroup/8bab47f1-016a-4c5f-8b58-46e57c02ad64/.meta'
Oct 01 17:09:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f40cff96-8848-426c-90c1-e99c5be0398c_0c523dfb-ea8e-423a-89c4-13a61256651c, sub_name:8bab47f1-016a-4c5f-8b58-46e57c02ad64, vol_name:cephfs) < ""
Oct 01 17:09:43 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8bab47f1-016a-4c5f-8b58-46e57c02ad64", "snap_name": "f40cff96-8848-426c-90c1-e99c5be0398c", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f40cff96-8848-426c-90c1-e99c5be0398c, sub_name:8bab47f1-016a-4c5f-8b58-46e57c02ad64, vol_name:cephfs) < ""
Oct 01 17:09:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8bab47f1-016a-4c5f-8b58-46e57c02ad64/.meta.tmp'
Oct 01 17:09:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8bab47f1-016a-4c5f-8b58-46e57c02ad64/.meta.tmp' to config b'/volumes/_nogroup/8bab47f1-016a-4c5f-8b58-46e57c02ad64/.meta'
Oct 01 17:09:43 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f40cff96-8848-426c-90c1-e99c5be0398c, sub_name:8bab47f1-016a-4c5f-8b58-46e57c02ad64, vol_name:cephfs) < ""
Oct 01 17:09:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 17:09:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/145296270' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:09:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 17:09:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/145296270' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:09:44 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 75 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 80 KiB/s wr, 5 op/s
Oct 01 17:09:44 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8bab47f1-016a-4c5f-8b58-46e57c02ad64", "snap_name": "f40cff96-8848-426c-90c1-e99c5be0398c_0c523dfb-ea8e-423a-89c4-13a61256651c", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:44 compute-0 ceph-mon[74273]: mgrmap e16: compute-0.pmbdpj(active, since 35m)
Oct 01 17:09:44 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8bab47f1-016a-4c5f-8b58-46e57c02ad64", "snap_name": "f40cff96-8848-426c-90c1-e99c5be0398c", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/145296270' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:09:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/145296270' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:09:44 compute-0 ceph-mon[74273]: pgmap v1225: 305 pgs: 305 active+clean; 75 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 80 KiB/s wr, 5 op/s
Oct 01 17:09:44 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0f3443b9-e9d9-4a47-839c-8c53679a0f3e", "snap_name": "9df0b9f8-e591-4a39-b66c-e88c6cec14cd_fbc8b7cc-a3c5-4938-a7e9-5dd26846f601", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9df0b9f8-e591-4a39-b66c-e88c6cec14cd_fbc8b7cc-a3c5-4938-a7e9-5dd26846f601, sub_name:0f3443b9-e9d9-4a47-839c-8c53679a0f3e, vol_name:cephfs) < ""
Oct 01 17:09:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0f3443b9-e9d9-4a47-839c-8c53679a0f3e/.meta.tmp'
Oct 01 17:09:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0f3443b9-e9d9-4a47-839c-8c53679a0f3e/.meta.tmp' to config b'/volumes/_nogroup/0f3443b9-e9d9-4a47-839c-8c53679a0f3e/.meta'
Oct 01 17:09:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9df0b9f8-e591-4a39-b66c-e88c6cec14cd_fbc8b7cc-a3c5-4938-a7e9-5dd26846f601, sub_name:0f3443b9-e9d9-4a47-839c-8c53679a0f3e, vol_name:cephfs) < ""
Oct 01 17:09:44 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0f3443b9-e9d9-4a47-839c-8c53679a0f3e", "snap_name": "9df0b9f8-e591-4a39-b66c-e88c6cec14cd", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:44 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9df0b9f8-e591-4a39-b66c-e88c6cec14cd, sub_name:0f3443b9-e9d9-4a47-839c-8c53679a0f3e, vol_name:cephfs) < ""
Oct 01 17:09:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0f3443b9-e9d9-4a47-839c-8c53679a0f3e/.meta.tmp'
Oct 01 17:09:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0f3443b9-e9d9-4a47-839c-8c53679a0f3e/.meta.tmp' to config b'/volumes/_nogroup/0f3443b9-e9d9-4a47-839c-8c53679a0f3e/.meta'
Oct 01 17:09:45 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9df0b9f8-e591-4a39-b66c-e88c6cec14cd, sub_name:0f3443b9-e9d9-4a47-839c-8c53679a0f3e, vol_name:cephfs) < ""
Oct 01 17:09:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Oct 01 17:09:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Oct 01 17:09:45 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Oct 01 17:09:45 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0f3443b9-e9d9-4a47-839c-8c53679a0f3e", "snap_name": "9df0b9f8-e591-4a39-b66c-e88c6cec14cd_fbc8b7cc-a3c5-4938-a7e9-5dd26846f601", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:45 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0f3443b9-e9d9-4a47-839c-8c53679a0f3e", "snap_name": "9df0b9f8-e591-4a39-b66c-e88c6cec14cd", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:46 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 75 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 75 KiB/s wr, 4 op/s
Oct 01 17:09:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:09:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Oct 01 17:09:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Oct 01 17:09:46 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Oct 01 17:09:46 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8bab47f1-016a-4c5f-8b58-46e57c02ad64", "format": "json"}]: dispatch
Oct 01 17:09:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8bab47f1-016a-4c5f-8b58-46e57c02ad64, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:09:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8bab47f1-016a-4c5f-8b58-46e57c02ad64, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:09:46 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:09:46.915+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8bab47f1-016a-4c5f-8b58-46e57c02ad64' of type subvolume
Oct 01 17:09:46 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8bab47f1-016a-4c5f-8b58-46e57c02ad64' of type subvolume
Oct 01 17:09:46 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8bab47f1-016a-4c5f-8b58-46e57c02ad64", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8bab47f1-016a-4c5f-8b58-46e57c02ad64, vol_name:cephfs) < ""
Oct 01 17:09:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8bab47f1-016a-4c5f-8b58-46e57c02ad64'' moved to trashcan
Oct 01 17:09:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:09:46 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8bab47f1-016a-4c5f-8b58-46e57c02ad64, vol_name:cephfs) < ""
Oct 01 17:09:46 compute-0 ceph-mon[74273]: osdmap e169: 3 total, 3 up, 3 in
Oct 01 17:09:46 compute-0 ceph-mon[74273]: pgmap v1227: 305 pgs: 305 active+clean; 75 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 75 KiB/s wr, 4 op/s
Oct 01 17:09:46 compute-0 ceph-mon[74273]: osdmap e170: 3 total, 3 up, 3 in
Oct 01 17:09:47 compute-0 podman[278957]: 2025-10-01 17:09:47.756931437 +0000 UTC m=+0.075614856 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 01 17:09:48 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8bab47f1-016a-4c5f-8b58-46e57c02ad64", "format": "json"}]: dispatch
Oct 01 17:09:48 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8bab47f1-016a-4c5f-8b58-46e57c02ad64", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:48 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 75 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 48 KiB/s wr, 6 op/s
Oct 01 17:09:48 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0f3443b9-e9d9-4a47-839c-8c53679a0f3e", "format": "json"}]: dispatch
Oct 01 17:09:48 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0f3443b9-e9d9-4a47-839c-8c53679a0f3e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:09:48 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0f3443b9-e9d9-4a47-839c-8c53679a0f3e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:09:48 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:09:48.190+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0f3443b9-e9d9-4a47-839c-8c53679a0f3e' of type subvolume
Oct 01 17:09:48 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0f3443b9-e9d9-4a47-839c-8c53679a0f3e' of type subvolume
Oct 01 17:09:48 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0f3443b9-e9d9-4a47-839c-8c53679a0f3e", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:48 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0f3443b9-e9d9-4a47-839c-8c53679a0f3e, vol_name:cephfs) < ""
Oct 01 17:09:48 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0f3443b9-e9d9-4a47-839c-8c53679a0f3e'' moved to trashcan
Oct 01 17:09:48 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:09:48 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0f3443b9-e9d9-4a47-839c-8c53679a0f3e, vol_name:cephfs) < ""
Oct 01 17:09:49 compute-0 ceph-mon[74273]: pgmap v1229: 305 pgs: 305 active+clean; 75 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 48 KiB/s wr, 6 op/s
Oct 01 17:09:49 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0f3443b9-e9d9-4a47-839c-8c53679a0f3e", "format": "json"}]: dispatch
Oct 01 17:09:49 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0f3443b9-e9d9-4a47-839c-8c53679a0f3e", "force": true, "format": "json"}]: dispatch
Oct 01 17:09:50 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 75 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 137 KiB/s wr, 8 op/s
Oct 01 17:09:51 compute-0 ceph-mon[74273]: pgmap v1230: 305 pgs: 305 active+clean; 75 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 137 KiB/s wr, 8 op/s
Oct 01 17:09:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:09:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Oct 01 17:09:51 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Oct 01 17:09:51 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Oct 01 17:09:52 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 75 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 958 B/s rd, 113 KiB/s wr, 7 op/s
Oct 01 17:09:52 compute-0 podman[278983]: 2025-10-01 17:09:52.75816859 +0000 UTC m=+0.069073070 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 01 17:09:52 compute-0 ceph-mon[74273]: osdmap e171: 3 total, 3 up, 3 in
Oct 01 17:09:52 compute-0 ceph-mon[74273]: pgmap v1232: 305 pgs: 305 active+clean; 75 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 958 B/s rd, 113 KiB/s wr, 7 op/s
Oct 01 17:09:54 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 75 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 101 KiB/s wr, 7 op/s
Oct 01 17:09:54 compute-0 ceph-mon[74273]: pgmap v1233: 305 pgs: 305 active+clean; 75 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 101 KiB/s wr, 7 op/s
Oct 01 17:09:56 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 75 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 87 KiB/s wr, 6 op/s
Oct 01 17:09:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:09:57 compute-0 ceph-mon[74273]: pgmap v1234: 305 pgs: 305 active+clean; 75 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 87 KiB/s wr, 6 op/s
Oct 01 17:09:58 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 75 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 79 KiB/s wr, 3 op/s
Oct 01 17:09:59 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d5107ec8-a115-44fa-8f38-14e43f7ac582", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:09:59 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d5107ec8-a115-44fa-8f38-14e43f7ac582, vol_name:cephfs) < ""
Oct 01 17:09:59 compute-0 ceph-mon[74273]: pgmap v1235: 305 pgs: 305 active+clean; 75 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 79 KiB/s wr, 3 op/s
Oct 01 17:10:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d5107ec8-a115-44fa-8f38-14e43f7ac582/.meta.tmp'
Oct 01 17:10:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d5107ec8-a115-44fa-8f38-14e43f7ac582/.meta.tmp' to config b'/volumes/_nogroup/d5107ec8-a115-44fa-8f38-14e43f7ac582/.meta'
Oct 01 17:10:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d5107ec8-a115-44fa-8f38-14e43f7ac582, vol_name:cephfs) < ""
Oct 01 17:10:00 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d5107ec8-a115-44fa-8f38-14e43f7ac582", "format": "json"}]: dispatch
Oct 01 17:10:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d5107ec8-a115-44fa-8f38-14e43f7ac582, vol_name:cephfs) < ""
Oct 01 17:10:00 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d5107ec8-a115-44fa-8f38-14e43f7ac582, vol_name:cephfs) < ""
Oct 01 17:10:00 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 17:10:00 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:10:00 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 76 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 26 KiB/s wr, 2 op/s
Oct 01 17:10:00 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d5107ec8-a115-44fa-8f38-14e43f7ac582", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 01 17:10:00 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d5107ec8-a115-44fa-8f38-14e43f7ac582", "format": "json"}]: dispatch
Oct 01 17:10:00 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/974982333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 17:10:00 compute-0 ceph-mon[74273]: pgmap v1236: 305 pgs: 305 active+clean; 76 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 26 KiB/s wr, 2 op/s
Oct 01 17:10:01 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:10:02 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 76 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 390 B/s rd, 25 KiB/s wr, 2 op/s
Oct 01 17:10:02 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d5107ec8-a115-44fa-8f38-14e43f7ac582", "snap_name": "345ab768-d585-48ee-ab8b-ca5dc8b6d2f3", "format": "json"}]: dispatch
Oct 01 17:10:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:345ab768-d585-48ee-ab8b-ca5dc8b6d2f3, sub_name:d5107ec8-a115-44fa-8f38-14e43f7ac582, vol_name:cephfs) < ""
Oct 01 17:10:02 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:345ab768-d585-48ee-ab8b-ca5dc8b6d2f3, sub_name:d5107ec8-a115-44fa-8f38-14e43f7ac582, vol_name:cephfs) < ""
Oct 01 17:10:03 compute-0 ceph-mon[74273]: pgmap v1237: 305 pgs: 305 active+clean; 76 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 390 B/s rd, 25 KiB/s wr, 2 op/s
Oct 01 17:10:04 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 76 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 40 KiB/s wr, 3 op/s
Oct 01 17:10:04 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d5107ec8-a115-44fa-8f38-14e43f7ac582", "snap_name": "345ab768-d585-48ee-ab8b-ca5dc8b6d2f3", "format": "json"}]: dispatch
Oct 01 17:10:05 compute-0 podman[279002]: 2025-10-01 17:10:05.794659406 +0000 UTC m=+0.108954589 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, config_id=iscsid)
Oct 01 17:10:05 compute-0 ceph-mon[74273]: pgmap v1238: 305 pgs: 305 active+clean; 76 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 40 KiB/s wr, 3 op/s
Oct 01 17:10:06 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 76 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s wr, 1 op/s
Oct 01 17:10:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:10:06 compute-0 podman[279023]: 2025-10-01 17:10:06.780276154 +0000 UTC m=+0.083990508 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd)
Oct 01 17:10:06 compute-0 ceph-mon[74273]: pgmap v1239: 305 pgs: 305 active+clean; 76 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s wr, 1 op/s
Oct 01 17:10:07 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d5107ec8-a115-44fa-8f38-14e43f7ac582", "snap_name": "345ab768-d585-48ee-ab8b-ca5dc8b6d2f3_c98a7624-3617-42e0-a937-3a16d4696ffc", "force": true, "format": "json"}]: dispatch
Oct 01 17:10:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:345ab768-d585-48ee-ab8b-ca5dc8b6d2f3_c98a7624-3617-42e0-a937-3a16d4696ffc, sub_name:d5107ec8-a115-44fa-8f38-14e43f7ac582, vol_name:cephfs) < ""
Oct 01 17:10:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d5107ec8-a115-44fa-8f38-14e43f7ac582/.meta.tmp'
Oct 01 17:10:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d5107ec8-a115-44fa-8f38-14e43f7ac582/.meta.tmp' to config b'/volumes/_nogroup/d5107ec8-a115-44fa-8f38-14e43f7ac582/.meta'
Oct 01 17:10:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:345ab768-d585-48ee-ab8b-ca5dc8b6d2f3_c98a7624-3617-42e0-a937-3a16d4696ffc, sub_name:d5107ec8-a115-44fa-8f38-14e43f7ac582, vol_name:cephfs) < ""
Oct 01 17:10:07 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d5107ec8-a115-44fa-8f38-14e43f7ac582", "snap_name": "345ab768-d585-48ee-ab8b-ca5dc8b6d2f3", "force": true, "format": "json"}]: dispatch
Oct 01 17:10:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:345ab768-d585-48ee-ab8b-ca5dc8b6d2f3, sub_name:d5107ec8-a115-44fa-8f38-14e43f7ac582, vol_name:cephfs) < ""
Oct 01 17:10:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d5107ec8-a115-44fa-8f38-14e43f7ac582/.meta.tmp'
Oct 01 17:10:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d5107ec8-a115-44fa-8f38-14e43f7ac582/.meta.tmp' to config b'/volumes/_nogroup/d5107ec8-a115-44fa-8f38-14e43f7ac582/.meta'
Oct 01 17:10:07 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:345ab768-d585-48ee-ab8b-ca5dc8b6d2f3, sub_name:d5107ec8-a115-44fa-8f38-14e43f7ac582, vol_name:cephfs) < ""
Oct 01 17:10:08 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 76 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s wr, 1 op/s
Oct 01 17:10:09 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d5107ec8-a115-44fa-8f38-14e43f7ac582", "snap_name": "345ab768-d585-48ee-ab8b-ca5dc8b6d2f3_c98a7624-3617-42e0-a937-3a16d4696ffc", "force": true, "format": "json"}]: dispatch
Oct 01 17:10:09 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d5107ec8-a115-44fa-8f38-14e43f7ac582", "snap_name": "345ab768-d585-48ee-ab8b-ca5dc8b6d2f3", "force": true, "format": "json"}]: dispatch
Oct 01 17:10:09 compute-0 ceph-mon[74273]: pgmap v1240: 305 pgs: 305 active+clean; 76 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s wr, 1 op/s
Oct 01 17:10:10 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 76 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s wr, 2 op/s
Oct 01 17:10:10 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d5107ec8-a115-44fa-8f38-14e43f7ac582", "format": "json"}]: dispatch
Oct 01 17:10:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d5107ec8-a115-44fa-8f38-14e43f7ac582, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:10:10 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d5107ec8-a115-44fa-8f38-14e43f7ac582, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 01 17:10:10 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:10:10.996+0000 7f813a030640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd5107ec8-a115-44fa-8f38-14e43f7ac582' of type subvolume
Oct 01 17:10:10 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd5107ec8-a115-44fa-8f38-14e43f7ac582' of type subvolume
Oct 01 17:10:11 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d5107ec8-a115-44fa-8f38-14e43f7ac582", "force": true, "format": "json"}]: dispatch
Oct 01 17:10:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d5107ec8-a115-44fa-8f38-14e43f7ac582, vol_name:cephfs) < ""
Oct 01 17:10:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d5107ec8-a115-44fa-8f38-14e43f7ac582'' moved to trashcan
Oct 01 17:10:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 01 17:10:11 compute-0 ceph-mgr[74571]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d5107ec8-a115-44fa-8f38-14e43f7ac582, vol_name:cephfs) < ""
Oct 01 17:10:11 compute-0 ceph-mon[74273]: pgmap v1241: 305 pgs: 305 active+clean; 76 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s wr, 2 op/s
Oct 01 17:10:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:10:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:10:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:10:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:10:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_17:10:11
Oct 01 17:10:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 17:10:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 17:10:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['volumes', '.mgr', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.log', 'default.rgw.meta']
Oct 01 17:10:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 17:10:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:10:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:10:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f816c3d0d00>)]
Oct 01 17:10:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Oct 01 17:10:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 17:10:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:10:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 17:10:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:10:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:10:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:10:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:10:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:10:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:10:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:10:12 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 76 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s wr, 2 op/s
Oct 01 17:10:12 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d5107ec8-a115-44fa-8f38-14e43f7ac582", "format": "json"}]: dispatch
Oct 01 17:10:12 compute-0 ceph-mon[74273]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d5107ec8-a115-44fa-8f38-14e43f7ac582", "force": true, "format": "json"}]: dispatch
Oct 01 17:10:13 compute-0 ceph-mon[74273]: pgmap v1242: 305 pgs: 305 active+clean; 76 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s wr, 2 op/s
Oct 01 17:10:13 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.pmbdpj(active, since 36m)
Oct 01 17:10:14 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 76 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 67 KiB/s wr, 3 op/s
Oct 01 17:10:14 compute-0 ceph-mon[74273]: mgrmap e17: compute-0.pmbdpj(active, since 36m)
Oct 01 17:10:15 compute-0 ceph-mon[74273]: pgmap v1243: 305 pgs: 305 active+clean; 76 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 67 KiB/s wr, 3 op/s
Oct 01 17:10:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Oct 01 17:10:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Oct 01 17:10:15 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Oct 01 17:10:16 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 76 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 58 KiB/s wr, 3 op/s
Oct 01 17:10:16 compute-0 ceph-mon[74273]: osdmap e172: 3 total, 3 up, 3 in
Oct 01 17:10:16 compute-0 ceph-mon[74273]: pgmap v1245: 305 pgs: 305 active+clean; 76 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 58 KiB/s wr, 3 op/s
Oct 01 17:10:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:10:18 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 76 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 59 KiB/s wr, 3 op/s
Oct 01 17:10:18 compute-0 nova_compute[259504]: 2025-10-01 17:10:18.778 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:10:18 compute-0 podman[279043]: 2025-10-01 17:10:18.827468101 +0000 UTC m=+0.137291972 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 01 17:10:19 compute-0 ceph-mon[74273]: pgmap v1246: 305 pgs: 305 active+clean; 76 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 59 KiB/s wr, 3 op/s
Oct 01 17:10:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:10:19.980 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:10:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:10:19.980 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:10:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:10:19.981 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:10:20 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 76 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 46 KiB/s wr, 3 op/s
Oct 01 17:10:21 compute-0 ceph-mon[74273]: pgmap v1247: 305 pgs: 305 active+clean; 76 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 46 KiB/s wr, 3 op/s
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005724435518004819 of space, bias 4.0, pg target 0.6869322621605782 quantized to 16 (current 16)
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:10:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:10:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:10:22 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 76 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 46 KiB/s wr, 3 op/s
Oct 01 17:10:23 compute-0 ceph-mon[74273]: pgmap v1248: 305 pgs: 305 active+clean; 76 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 46 KiB/s wr, 3 op/s
Oct 01 17:10:23 compute-0 podman[279070]: 2025-10-01 17:10:23.778439146 +0000 UTC m=+0.082347445 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 01 17:10:24 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 25 KiB/s wr, 1 op/s
Oct 01 17:10:24 compute-0 ceph-mon[74273]: pgmap v1249: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 25 KiB/s wr, 1 op/s
Oct 01 17:10:24 compute-0 nova_compute[259504]: 2025-10-01 17:10:24.746 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:10:24 compute-0 nova_compute[259504]: 2025-10-01 17:10:24.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:10:24 compute-0 nova_compute[259504]: 2025-10-01 17:10:24.750 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 17:10:24 compute-0 nova_compute[259504]: 2025-10-01 17:10:24.750 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 17:10:24 compute-0 nova_compute[259504]: 2025-10-01 17:10:24.891 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 17:10:24 compute-0 nova_compute[259504]: 2025-10-01 17:10:24.892 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:10:25 compute-0 nova_compute[259504]: 2025-10-01 17:10:25.100 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:10:25 compute-0 nova_compute[259504]: 2025-10-01 17:10:25.101 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:10:25 compute-0 nova_compute[259504]: 2025-10-01 17:10:25.101 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:10:25 compute-0 nova_compute[259504]: 2025-10-01 17:10:25.102 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 17:10:25 compute-0 nova_compute[259504]: 2025-10-01 17:10:25.102 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:10:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:10:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3717600765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:10:25 compute-0 nova_compute[259504]: 2025-10-01 17:10:25.555 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:10:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3717600765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:10:25 compute-0 nova_compute[259504]: 2025-10-01 17:10:25.801 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 17:10:25 compute-0 nova_compute[259504]: 2025-10-01 17:10:25.803 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5032MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 17:10:25 compute-0 nova_compute[259504]: 2025-10-01 17:10:25.803 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:10:25 compute-0 nova_compute[259504]: 2025-10-01 17:10:25.804 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:10:26 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 189 B/s rd, 23 KiB/s wr, 1 op/s
Oct 01 17:10:26 compute-0 ceph-mon[74273]: pgmap v1250: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 189 B/s rd, 23 KiB/s wr, 1 op/s
Oct 01 17:10:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:10:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Oct 01 17:10:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Oct 01 17:10:27 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Oct 01 17:10:27 compute-0 nova_compute[259504]: 2025-10-01 17:10:27.442 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 17:10:27 compute-0 nova_compute[259504]: 2025-10-01 17:10:27.442 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 17:10:27 compute-0 nova_compute[259504]: 2025-10-01 17:10:27.457 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:10:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:10:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3753724075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:10:27 compute-0 nova_compute[259504]: 2025-10-01 17:10:27.926 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:10:27 compute-0 nova_compute[259504]: 2025-10-01 17:10:27.935 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 17:10:28 compute-0 ceph-mon[74273]: osdmap e173: 3 total, 3 up, 3 in
Oct 01 17:10:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3753724075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:10:28 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s wr, 0 op/s
Oct 01 17:10:28 compute-0 nova_compute[259504]: 2025-10-01 17:10:28.566 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 17:10:28 compute-0 nova_compute[259504]: 2025-10-01 17:10:28.567 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 17:10:28 compute-0 nova_compute[259504]: 2025-10-01 17:10:28.567 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:10:29 compute-0 ceph-mon[74273]: pgmap v1252: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s wr, 0 op/s
Oct 01 17:10:29 compute-0 nova_compute[259504]: 2025-10-01 17:10:29.426 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:10:29 compute-0 nova_compute[259504]: 2025-10-01 17:10:29.427 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:10:29 compute-0 nova_compute[259504]: 2025-10-01 17:10:29.428 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:10:29 compute-0 nova_compute[259504]: 2025-10-01 17:10:29.428 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:10:29 compute-0 nova_compute[259504]: 2025-10-01 17:10:29.429 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 17:10:30 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s wr, 0 op/s
Oct 01 17:10:30 compute-0 nova_compute[259504]: 2025-10-01 17:10:30.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:10:31 compute-0 ceph-mon[74273]: pgmap v1253: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s wr, 0 op/s
Oct 01 17:10:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:10:32 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s wr, 0 op/s
Oct 01 17:10:32 compute-0 nova_compute[259504]: 2025-10-01 17:10:32.745 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:10:33 compute-0 ceph-mon[74273]: pgmap v1254: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s wr, 0 op/s
Oct 01 17:10:34 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s wr, 0 op/s
Oct 01 17:10:35 compute-0 ceph-mon[74273]: pgmap v1255: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s wr, 0 op/s
Oct 01 17:10:36 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s wr, 0 op/s
Oct 01 17:10:36 compute-0 podman[279133]: 2025-10-01 17:10:36.792198122 +0000 UTC m=+0.097559715 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 17:10:36 compute-0 podman[279153]: 2025-10-01 17:10:36.919044877 +0000 UTC m=+0.083474354 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 01 17:10:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:10:37 compute-0 ceph-mon[74273]: pgmap v1256: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s wr, 0 op/s
Oct 01 17:10:37 compute-0 sudo[279173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:10:37 compute-0 sudo[279173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:10:37 compute-0 sudo[279173]: pam_unix(sudo:session): session closed for user root
Oct 01 17:10:37 compute-0 sudo[279198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:10:37 compute-0 sudo[279198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:10:37 compute-0 sudo[279198]: pam_unix(sudo:session): session closed for user root
Oct 01 17:10:37 compute-0 sudo[279223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:10:37 compute-0 sudo[279223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:10:37 compute-0 sudo[279223]: pam_unix(sudo:session): session closed for user root
Oct 01 17:10:37 compute-0 sudo[279248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 17:10:37 compute-0 sudo[279248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:10:38 compute-0 sudo[279248]: pam_unix(sudo:session): session closed for user root
Oct 01 17:10:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:10:38 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:10:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 17:10:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:10:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 17:10:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:10:38 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev f13f6b46-6c4c-44e7-aee4-fbab1983275b does not exist
Oct 01 17:10:38 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev ff01de70-e517-4906-b3b8-21b3d01d4e7a does not exist
Oct 01 17:10:38 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev d5c9004e-3b5c-4c84-bcd1-d93fd171aed6 does not exist
Oct 01 17:10:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 17:10:38 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:10:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 17:10:38 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:10:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:10:38 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:10:38 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 01 17:10:38 compute-0 sudo[279304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:10:38 compute-0 sudo[279304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:10:38 compute-0 sudo[279304]: pam_unix(sudo:session): session closed for user root
Oct 01 17:10:38 compute-0 sudo[279329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:10:38 compute-0 sudo[279329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:10:38 compute-0 sudo[279329]: pam_unix(sudo:session): session closed for user root
Oct 01 17:10:38 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:10:38 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:10:38 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:10:38 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:10:38 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:10:38 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:10:38 compute-0 sudo[279354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:10:38 compute-0 sudo[279354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:10:38 compute-0 sudo[279354]: pam_unix(sudo:session): session closed for user root
Oct 01 17:10:38 compute-0 sudo[279379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 17:10:38 compute-0 sudo[279379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:10:38 compute-0 podman[279445]: 2025-10-01 17:10:38.907277216 +0000 UTC m=+0.073487745 container create baa84159be2db15a5447262a1abf1f62fab19063ca22e4b6e706e9fc87985445 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_banzai, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 17:10:38 compute-0 systemd[1]: Started libpod-conmon-baa84159be2db15a5447262a1abf1f62fab19063ca22e4b6e706e9fc87985445.scope.
Oct 01 17:10:38 compute-0 podman[279445]: 2025-10-01 17:10:38.874714456 +0000 UTC m=+0.040925065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:10:39 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:10:39 compute-0 podman[279445]: 2025-10-01 17:10:39.031809967 +0000 UTC m=+0.198020556 container init baa84159be2db15a5447262a1abf1f62fab19063ca22e4b6e706e9fc87985445 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_banzai, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:10:39 compute-0 podman[279445]: 2025-10-01 17:10:39.0469357 +0000 UTC m=+0.213146219 container start baa84159be2db15a5447262a1abf1f62fab19063ca22e4b6e706e9fc87985445 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 17:10:39 compute-0 podman[279445]: 2025-10-01 17:10:39.051656503 +0000 UTC m=+0.217867002 container attach baa84159be2db15a5447262a1abf1f62fab19063ca22e4b6e706e9fc87985445 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 17:10:39 compute-0 bold_banzai[279462]: 167 167
Oct 01 17:10:39 compute-0 systemd[1]: libpod-baa84159be2db15a5447262a1abf1f62fab19063ca22e4b6e706e9fc87985445.scope: Deactivated successfully.
Oct 01 17:10:39 compute-0 podman[279445]: 2025-10-01 17:10:39.058524826 +0000 UTC m=+0.224735355 container died baa84159be2db15a5447262a1abf1f62fab19063ca22e4b6e706e9fc87985445 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 17:10:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce32447196097173859cd4170e8f98741aad696082de37962ca1e4d152394199-merged.mount: Deactivated successfully.
Oct 01 17:10:39 compute-0 podman[279445]: 2025-10-01 17:10:39.126264654 +0000 UTC m=+0.292475183 container remove baa84159be2db15a5447262a1abf1f62fab19063ca22e4b6e706e9fc87985445 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:10:39 compute-0 systemd[1]: libpod-conmon-baa84159be2db15a5447262a1abf1f62fab19063ca22e4b6e706e9fc87985445.scope: Deactivated successfully.
Oct 01 17:10:39 compute-0 ceph-mon[74273]: pgmap v1257: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 01 17:10:39 compute-0 podman[279488]: 2025-10-01 17:10:39.394944493 +0000 UTC m=+0.101875856 container create 659b964b7681f89a315226c20d91bd44062c4224ea0651e69b0fe6e61313b186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_swirles, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:10:39 compute-0 podman[279488]: 2025-10-01 17:10:39.326837432 +0000 UTC m=+0.033768795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:10:39 compute-0 systemd[1]: Started libpod-conmon-659b964b7681f89a315226c20d91bd44062c4224ea0651e69b0fe6e61313b186.scope.
Oct 01 17:10:39 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:10:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdaf3abb60326d9c64ae2eebc73969317b58e919cb4c7fd7fb4013aa3b0bb0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:10:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdaf3abb60326d9c64ae2eebc73969317b58e919cb4c7fd7fb4013aa3b0bb0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:10:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdaf3abb60326d9c64ae2eebc73969317b58e919cb4c7fd7fb4013aa3b0bb0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:10:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdaf3abb60326d9c64ae2eebc73969317b58e919cb4c7fd7fb4013aa3b0bb0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:10:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdaf3abb60326d9c64ae2eebc73969317b58e919cb4c7fd7fb4013aa3b0bb0f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 17:10:39 compute-0 podman[279488]: 2025-10-01 17:10:39.55582094 +0000 UTC m=+0.262752313 container init 659b964b7681f89a315226c20d91bd44062c4224ea0651e69b0fe6e61313b186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:10:39 compute-0 podman[279488]: 2025-10-01 17:10:39.574131896 +0000 UTC m=+0.281063269 container start 659b964b7681f89a315226c20d91bd44062c4224ea0651e69b0fe6e61313b186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_swirles, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 17:10:39 compute-0 podman[279488]: 2025-10-01 17:10:39.57925732 +0000 UTC m=+0.286188743 container attach 659b964b7681f89a315226c20d91bd44062c4224ea0651e69b0fe6e61313b186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_swirles, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 17:10:40 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s wr, 0 op/s
Oct 01 17:10:40 compute-0 reverent_swirles[279505]: --> passed data devices: 0 physical, 3 LVM
Oct 01 17:10:40 compute-0 reverent_swirles[279505]: --> relative data size: 1.0
Oct 01 17:10:40 compute-0 reverent_swirles[279505]: --> All data devices are unavailable
Oct 01 17:10:40 compute-0 systemd[1]: libpod-659b964b7681f89a315226c20d91bd44062c4224ea0651e69b0fe6e61313b186.scope: Deactivated successfully.
Oct 01 17:10:40 compute-0 systemd[1]: libpod-659b964b7681f89a315226c20d91bd44062c4224ea0651e69b0fe6e61313b186.scope: Consumed 1.022s CPU time.
Oct 01 17:10:40 compute-0 podman[279534]: 2025-10-01 17:10:40.694554414 +0000 UTC m=+0.026887402 container died 659b964b7681f89a315226c20d91bd44062c4224ea0651e69b0fe6e61313b186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_swirles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:10:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-efdaf3abb60326d9c64ae2eebc73969317b58e919cb4c7fd7fb4013aa3b0bb0f-merged.mount: Deactivated successfully.
Oct 01 17:10:40 compute-0 podman[279534]: 2025-10-01 17:10:40.750995753 +0000 UTC m=+0.083328711 container remove 659b964b7681f89a315226c20d91bd44062c4224ea0651e69b0fe6e61313b186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_swirles, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:10:40 compute-0 systemd[1]: libpod-conmon-659b964b7681f89a315226c20d91bd44062c4224ea0651e69b0fe6e61313b186.scope: Deactivated successfully.
Oct 01 17:10:40 compute-0 sudo[279379]: pam_unix(sudo:session): session closed for user root
Oct 01 17:10:40 compute-0 sudo[279549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:10:40 compute-0 sudo[279549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:10:40 compute-0 sudo[279549]: pam_unix(sudo:session): session closed for user root
Oct 01 17:10:40 compute-0 sudo[279574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:10:40 compute-0 sudo[279574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:10:40 compute-0 sudo[279574]: pam_unix(sudo:session): session closed for user root
Oct 01 17:10:41 compute-0 sudo[279599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:10:41 compute-0 sudo[279599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:10:41 compute-0 sudo[279599]: pam_unix(sudo:session): session closed for user root
Oct 01 17:10:41 compute-0 sudo[279624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 17:10:41 compute-0 sudo[279624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:10:41 compute-0 ceph-mon[74273]: pgmap v1258: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s wr, 0 op/s
Oct 01 17:10:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:10:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:10:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:10:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:10:41 compute-0 podman[279689]: 2025-10-01 17:10:41.470729057 +0000 UTC m=+0.057039962 container create c98d1996dac0ccb9b4b8a6c66d84e73c97e2c5a0d140c4420834b32c7ad99915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 17:10:41 compute-0 systemd[1]: Started libpod-conmon-c98d1996dac0ccb9b4b8a6c66d84e73c97e2c5a0d140c4420834b32c7ad99915.scope.
Oct 01 17:10:41 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:10:41 compute-0 podman[279689]: 2025-10-01 17:10:41.45329083 +0000 UTC m=+0.039601746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:10:41 compute-0 podman[279689]: 2025-10-01 17:10:41.563425725 +0000 UTC m=+0.149736671 container init c98d1996dac0ccb9b4b8a6c66d84e73c97e2c5a0d140c4420834b32c7ad99915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 01 17:10:41 compute-0 podman[279689]: 2025-10-01 17:10:41.570638352 +0000 UTC m=+0.156949258 container start c98d1996dac0ccb9b4b8a6c66d84e73c97e2c5a0d140c4420834b32c7ad99915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_brown, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:10:41 compute-0 podman[279689]: 2025-10-01 17:10:41.574277195 +0000 UTC m=+0.160588141 container attach c98d1996dac0ccb9b4b8a6c66d84e73c97e2c5a0d140c4420834b32c7ad99915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_brown, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 17:10:41 compute-0 magical_brown[279705]: 167 167
Oct 01 17:10:41 compute-0 systemd[1]: libpod-c98d1996dac0ccb9b4b8a6c66d84e73c97e2c5a0d140c4420834b32c7ad99915.scope: Deactivated successfully.
Oct 01 17:10:41 compute-0 podman[279689]: 2025-10-01 17:10:41.576581369 +0000 UTC m=+0.162892265 container died c98d1996dac0ccb9b4b8a6c66d84e73c97e2c5a0d140c4420834b32c7ad99915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:10:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-08fc4e59f566c4aa4ec9bbbf723abe491ae8c7c7fb00371c8e081076bf46e408-merged.mount: Deactivated successfully.
Oct 01 17:10:41 compute-0 podman[279689]: 2025-10-01 17:10:41.612939863 +0000 UTC m=+0.199250769 container remove c98d1996dac0ccb9b4b8a6c66d84e73c97e2c5a0d140c4420834b32c7ad99915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_brown, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 17:10:41 compute-0 systemd[1]: libpod-conmon-c98d1996dac0ccb9b4b8a6c66d84e73c97e2c5a0d140c4420834b32c7ad99915.scope: Deactivated successfully.
Oct 01 17:10:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:10:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:10:41 compute-0 podman[279730]: 2025-10-01 17:10:41.790348028 +0000 UTC m=+0.052206145 container create 66c9f522064645412af8d358ec18b6f55035728595092b6b0a6a34145f4e4b91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_colden, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:10:41 compute-0 systemd[1]: Started libpod-conmon-66c9f522064645412af8d358ec18b6f55035728595092b6b0a6a34145f4e4b91.scope.
Oct 01 17:10:41 compute-0 podman[279730]: 2025-10-01 17:10:41.762126878 +0000 UTC m=+0.023985055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:10:41 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:10:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3acf573bdc9c597c14313917a92472e4900e99299f0451dfa564d75651cc2d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:10:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3acf573bdc9c597c14313917a92472e4900e99299f0451dfa564d75651cc2d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:10:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3acf573bdc9c597c14313917a92472e4900e99299f0451dfa564d75651cc2d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:10:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3acf573bdc9c597c14313917a92472e4900e99299f0451dfa564d75651cc2d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:10:41 compute-0 podman[279730]: 2025-10-01 17:10:41.912734548 +0000 UTC m=+0.174592735 container init 66c9f522064645412af8d358ec18b6f55035728595092b6b0a6a34145f4e4b91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_colden, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 17:10:41 compute-0 podman[279730]: 2025-10-01 17:10:41.920653113 +0000 UTC m=+0.182511230 container start 66c9f522064645412af8d358ec18b6f55035728595092b6b0a6a34145f4e4b91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 17:10:41 compute-0 podman[279730]: 2025-10-01 17:10:41.924956065 +0000 UTC m=+0.186814192 container attach 66c9f522064645412af8d358ec18b6f55035728595092b6b0a6a34145f4e4b91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_colden, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:10:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:10:42 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:10:42 compute-0 epic_colden[279747]: {
Oct 01 17:10:42 compute-0 epic_colden[279747]:     "0": [
Oct 01 17:10:42 compute-0 epic_colden[279747]:         {
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "devices": [
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "/dev/loop3"
Oct 01 17:10:42 compute-0 epic_colden[279747]:             ],
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "lv_name": "ceph_lv0",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "lv_size": "21470642176",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "name": "ceph_lv0",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "tags": {
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.cluster_name": "ceph",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.crush_device_class": "",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.encrypted": "0",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.osd_id": "0",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.type": "block",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.vdo": "0"
Oct 01 17:10:42 compute-0 epic_colden[279747]:             },
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "type": "block",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "vg_name": "ceph_vg0"
Oct 01 17:10:42 compute-0 epic_colden[279747]:         }
Oct 01 17:10:42 compute-0 epic_colden[279747]:     ],
Oct 01 17:10:42 compute-0 epic_colden[279747]:     "1": [
Oct 01 17:10:42 compute-0 epic_colden[279747]:         {
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "devices": [
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "/dev/loop4"
Oct 01 17:10:42 compute-0 epic_colden[279747]:             ],
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "lv_name": "ceph_lv1",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "lv_size": "21470642176",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "name": "ceph_lv1",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "tags": {
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.cluster_name": "ceph",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.crush_device_class": "",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.encrypted": "0",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.osd_id": "1",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.type": "block",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.vdo": "0"
Oct 01 17:10:42 compute-0 epic_colden[279747]:             },
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "type": "block",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "vg_name": "ceph_vg1"
Oct 01 17:10:42 compute-0 epic_colden[279747]:         }
Oct 01 17:10:42 compute-0 epic_colden[279747]:     ],
Oct 01 17:10:42 compute-0 epic_colden[279747]:     "2": [
Oct 01 17:10:42 compute-0 epic_colden[279747]:         {
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "devices": [
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "/dev/loop5"
Oct 01 17:10:42 compute-0 epic_colden[279747]:             ],
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "lv_name": "ceph_lv2",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "lv_size": "21470642176",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "name": "ceph_lv2",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "tags": {
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.cluster_name": "ceph",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.crush_device_class": "",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.encrypted": "0",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.osd_id": "2",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.type": "block",
Oct 01 17:10:42 compute-0 epic_colden[279747]:                 "ceph.vdo": "0"
Oct 01 17:10:42 compute-0 epic_colden[279747]:             },
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "type": "block",
Oct 01 17:10:42 compute-0 epic_colden[279747]:             "vg_name": "ceph_vg2"
Oct 01 17:10:42 compute-0 epic_colden[279747]:         }
Oct 01 17:10:42 compute-0 epic_colden[279747]:     ]
Oct 01 17:10:42 compute-0 epic_colden[279747]: }
Oct 01 17:10:42 compute-0 systemd[1]: libpod-66c9f522064645412af8d358ec18b6f55035728595092b6b0a6a34145f4e4b91.scope: Deactivated successfully.
Oct 01 17:10:42 compute-0 podman[279730]: 2025-10-01 17:10:42.680429762 +0000 UTC m=+0.942287849 container died 66c9f522064645412af8d358ec18b6f55035728595092b6b0a6a34145f4e4b91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:10:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3acf573bdc9c597c14313917a92472e4900e99299f0451dfa564d75651cc2d1-merged.mount: Deactivated successfully.
Oct 01 17:10:42 compute-0 podman[279730]: 2025-10-01 17:10:42.73884211 +0000 UTC m=+1.000700237 container remove 66c9f522064645412af8d358ec18b6f55035728595092b6b0a6a34145f4e4b91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_colden, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:10:42 compute-0 systemd[1]: libpod-conmon-66c9f522064645412af8d358ec18b6f55035728595092b6b0a6a34145f4e4b91.scope: Deactivated successfully.
Oct 01 17:10:42 compute-0 sudo[279624]: pam_unix(sudo:session): session closed for user root
Oct 01 17:10:42 compute-0 sudo[279767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:10:42 compute-0 sudo[279767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:10:42 compute-0 sudo[279767]: pam_unix(sudo:session): session closed for user root
Oct 01 17:10:42 compute-0 sudo[279792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:10:42 compute-0 sudo[279792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:10:42 compute-0 sudo[279792]: pam_unix(sudo:session): session closed for user root
Oct 01 17:10:42 compute-0 sudo[279817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:10:42 compute-0 sudo[279817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:10:42 compute-0 sudo[279817]: pam_unix(sudo:session): session closed for user root
Oct 01 17:10:43 compute-0 sudo[279842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 17:10:43 compute-0 sudo[279842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:10:43 compute-0 ceph-mon[74273]: pgmap v1259: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:10:43 compute-0 podman[279909]: 2025-10-01 17:10:43.444620895 +0000 UTC m=+0.049998967 container create e3d351225f7affa46fa5220734bea7855d46fecdebb13292e14f442b0b397c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_maxwell, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:10:43 compute-0 systemd[1]: Started libpod-conmon-e3d351225f7affa46fa5220734bea7855d46fecdebb13292e14f442b0b397c62.scope.
Oct 01 17:10:43 compute-0 podman[279909]: 2025-10-01 17:10:43.416842325 +0000 UTC m=+0.022220477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:10:43 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:10:43 compute-0 podman[279909]: 2025-10-01 17:10:43.53738775 +0000 UTC m=+0.142765882 container init e3d351225f7affa46fa5220734bea7855d46fecdebb13292e14f442b0b397c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_maxwell, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 17:10:43 compute-0 podman[279909]: 2025-10-01 17:10:43.543444371 +0000 UTC m=+0.148822453 container start e3d351225f7affa46fa5220734bea7855d46fecdebb13292e14f442b0b397c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_maxwell, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:10:43 compute-0 podman[279909]: 2025-10-01 17:10:43.547705235 +0000 UTC m=+0.153083367 container attach e3d351225f7affa46fa5220734bea7855d46fecdebb13292e14f442b0b397c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_maxwell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 17:10:43 compute-0 strange_maxwell[279925]: 167 167
Oct 01 17:10:43 compute-0 systemd[1]: libpod-e3d351225f7affa46fa5220734bea7855d46fecdebb13292e14f442b0b397c62.scope: Deactivated successfully.
Oct 01 17:10:43 compute-0 podman[279909]: 2025-10-01 17:10:43.55105484 +0000 UTC m=+0.156432892 container died e3d351225f7affa46fa5220734bea7855d46fecdebb13292e14f442b0b397c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_maxwell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 17:10:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b6de04df0e89baeb9068c97847886737775a6cc7bd053cc9659928e0aeb99c4-merged.mount: Deactivated successfully.
Oct 01 17:10:43 compute-0 podman[279909]: 2025-10-01 17:10:43.591198251 +0000 UTC m=+0.196576313 container remove e3d351225f7affa46fa5220734bea7855d46fecdebb13292e14f442b0b397c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 17:10:43 compute-0 systemd[1]: libpod-conmon-e3d351225f7affa46fa5220734bea7855d46fecdebb13292e14f442b0b397c62.scope: Deactivated successfully.
Oct 01 17:10:43 compute-0 podman[279948]: 2025-10-01 17:10:43.795006979 +0000 UTC m=+0.054654453 container create 2f53f2f25ce687c7512d3a946145d0b19d58062faf00aca5e6faa5a3fcd0ffc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_agnesi, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 17:10:43 compute-0 systemd[1]: Started libpod-conmon-2f53f2f25ce687c7512d3a946145d0b19d58062faf00aca5e6faa5a3fcd0ffc9.scope.
Oct 01 17:10:43 compute-0 podman[279948]: 2025-10-01 17:10:43.765989446 +0000 UTC m=+0.025636980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:10:43 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:10:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e53c727b97f3e375168ab683b11d080121b1a292c385a41df3eeb10f4b430d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:10:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e53c727b97f3e375168ab683b11d080121b1a292c385a41df3eeb10f4b430d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:10:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 17:10:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1379046869' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:10:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e53c727b97f3e375168ab683b11d080121b1a292c385a41df3eeb10f4b430d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:10:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 17:10:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1379046869' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:10:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e53c727b97f3e375168ab683b11d080121b1a292c385a41df3eeb10f4b430d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:10:43 compute-0 podman[279948]: 2025-10-01 17:10:43.9054294 +0000 UTC m=+0.165076874 container init 2f53f2f25ce687c7512d3a946145d0b19d58062faf00aca5e6faa5a3fcd0ffc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_agnesi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:10:43 compute-0 podman[279948]: 2025-10-01 17:10:43.918755616 +0000 UTC m=+0.178403070 container start 2f53f2f25ce687c7512d3a946145d0b19d58062faf00aca5e6faa5a3fcd0ffc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:10:43 compute-0 podman[279948]: 2025-10-01 17:10:43.922594159 +0000 UTC m=+0.182241623 container attach 2f53f2f25ce687c7512d3a946145d0b19d58062faf00aca5e6faa5a3fcd0ffc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_agnesi, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 01 17:10:44 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:10:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1379046869' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:10:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1379046869' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]: {
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:         "osd_id": 2,
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:         "type": "bluestore"
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:     },
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:         "osd_id": 0,
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:         "type": "bluestore"
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:     },
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:         "osd_id": 1,
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:         "type": "bluestore"
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]:     }
Oct 01 17:10:44 compute-0 beautiful_agnesi[279964]: }
Oct 01 17:10:44 compute-0 systemd[1]: libpod-2f53f2f25ce687c7512d3a946145d0b19d58062faf00aca5e6faa5a3fcd0ffc9.scope: Deactivated successfully.
Oct 01 17:10:44 compute-0 systemd[1]: libpod-2f53f2f25ce687c7512d3a946145d0b19d58062faf00aca5e6faa5a3fcd0ffc9.scope: Consumed 1.001s CPU time.
Oct 01 17:10:44 compute-0 podman[279948]: 2025-10-01 17:10:44.923450108 +0000 UTC m=+1.183097582 container died 2f53f2f25ce687c7512d3a946145d0b19d58062faf00aca5e6faa5a3fcd0ffc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 17:10:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e53c727b97f3e375168ab683b11d080121b1a292c385a41df3eeb10f4b430d3-merged.mount: Deactivated successfully.
Oct 01 17:10:45 compute-0 ceph-mon[74273]: pgmap v1260: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:10:45 compute-0 podman[279948]: 2025-10-01 17:10:45.557212373 +0000 UTC m=+1.816859817 container remove 2f53f2f25ce687c7512d3a946145d0b19d58062faf00aca5e6faa5a3fcd0ffc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_agnesi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 17:10:45 compute-0 systemd[1]: libpod-conmon-2f53f2f25ce687c7512d3a946145d0b19d58062faf00aca5e6faa5a3fcd0ffc9.scope: Deactivated successfully.
Oct 01 17:10:45 compute-0 sudo[279842]: pam_unix(sudo:session): session closed for user root
Oct 01 17:10:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:10:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:10:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:10:45 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:10:45 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 7ecb8c7e-bfeb-4458-aead-370eb6269820 does not exist
Oct 01 17:10:45 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 0cc0b407-96fa-499c-ad51-ddd75a10ba44 does not exist
Oct 01 17:10:45 compute-0 sudo[280010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:10:45 compute-0 sudo[280010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:10:45 compute-0 sudo[280010]: pam_unix(sudo:session): session closed for user root
Oct 01 17:10:45 compute-0 sudo[280035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 17:10:45 compute-0 sudo[280035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:10:45 compute-0 sudo[280035]: pam_unix(sudo:session): session closed for user root
Oct 01 17:10:46 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:10:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:10:46 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:10:46 compute-0 ceph-mon[74273]: pgmap v1261: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:10:46 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct 01 17:10:46 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:10:46.889199) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 17:10:46 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct 01 17:10:46 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338646889240, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2461, "num_deletes": 508, "total_data_size": 3435138, "memory_usage": 3506512, "flush_reason": "Manual Compaction"}
Oct 01 17:10:46 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct 01 17:10:46 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338646967436, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3217932, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26222, "largest_seqno": 28682, "table_properties": {"data_size": 3207367, "index_size": 6163, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 27112, "raw_average_key_size": 20, "raw_value_size": 3183529, "raw_average_value_size": 2433, "num_data_blocks": 271, "num_entries": 1308, "num_filter_entries": 1308, "num_deletions": 508, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759338455, "oldest_key_time": 1759338455, "file_creation_time": 1759338646, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct 01 17:10:46 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 78304 microseconds, and 14178 cpu microseconds.
Oct 01 17:10:46 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:10:46.967500) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3217932 bytes OK
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:10:46.967526) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:10:47.026857) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:10:47.026879) EVENT_LOG_v1 {"time_micros": 1759338647026872, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:10:47.026923) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3423583, prev total WAL file size 3450071, number of live WAL files 2.
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:10:47.028948) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3142KB)], [59(9828KB)]
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338647028981, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 13282038, "oldest_snapshot_seqno": -1}
Oct 01 17:10:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5872 keys, 8645559 bytes, temperature: kUnknown
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338647241969, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8645559, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8605739, "index_size": 24030, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14725, "raw_key_size": 146689, "raw_average_key_size": 24, "raw_value_size": 8499959, "raw_average_value_size": 1447, "num_data_blocks": 985, "num_entries": 5872, "num_filter_entries": 5872, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336399, "oldest_key_time": 0, "file_creation_time": 1759338647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:10:47.242209) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8645559 bytes
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:10:47.301019) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 62.3 rd, 40.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 9.6 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(6.8) write-amplify(2.7) OK, records in: 6891, records dropped: 1019 output_compression: NoCompression
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:10:47.301066) EVENT_LOG_v1 {"time_micros": 1759338647301048, "job": 32, "event": "compaction_finished", "compaction_time_micros": 213060, "compaction_time_cpu_micros": 28441, "output_level": 6, "num_output_files": 1, "total_output_size": 8645559, "num_input_records": 6891, "num_output_records": 5872, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338647303045, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338647307371, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:10:47.028068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:10:47.307456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:10:47.307461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:10:47.307463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:10:47.307467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:10:47 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:10:47.307470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:10:48 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:10:49 compute-0 ceph-mon[74273]: pgmap v1262: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:10:49 compute-0 podman[280060]: 2025-10-01 17:10:49.809116708 +0000 UTC m=+0.120552216 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 01 17:10:50 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:10:51 compute-0 ceph-mon[74273]: pgmap v1263: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:10:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:10:52 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:10:53 compute-0 ceph-mon[74273]: pgmap v1264: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:10:54 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:10:54 compute-0 podman[280086]: 2025-10-01 17:10:54.743923562 +0000 UTC m=+0.058961734 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 17:10:55 compute-0 ceph-mon[74273]: pgmap v1265: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:10:56 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:10:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:10:57 compute-0 ceph-mon[74273]: pgmap v1266: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:10:58 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:10:59 compute-0 ceph-mon[74273]: pgmap v1267: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:00 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:01 compute-0 ceph-mon[74273]: pgmap v1268: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:11:02 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:02 compute-0 ceph-mon[74273]: pgmap v1269: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:04 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:05 compute-0 ceph-mon[74273]: pgmap v1270: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:06 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:06 compute-0 ceph-mon[74273]: pgmap v1271: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:11:07 compute-0 podman[280106]: 2025-10-01 17:11:07.749639811 +0000 UTC m=+0.058539844 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 01 17:11:07 compute-0 podman[280107]: 2025-10-01 17:11:07.787583472 +0000 UTC m=+0.100802596 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 01 17:11:08 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:08 compute-0 sshd-session[280146]: Accepted publickey for zuul from 192.168.122.10 port 58042 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 17:11:08 compute-0 systemd-logind[788]: New session 53 of user zuul.
Oct 01 17:11:08 compute-0 systemd[1]: Started Session 53 of User zuul.
Oct 01 17:11:08 compute-0 sshd-session[280146]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 17:11:08 compute-0 sudo[280150]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 01 17:11:08 compute-0 sudo[280150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 17:11:08 compute-0 ceph-mon[74273]: pgmap v1272: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:10 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:10 compute-0 ceph-mon[74273]: pgmap v1273: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:11 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14505 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:11:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f816c1b76a0>)]
Oct 01 17:11:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Oct 01 17:11:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:11:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f816c378bb0>)]
Oct 01 17:11:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Oct 01 17:11:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_17:11:11
Oct 01 17:11:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 17:11:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 17:11:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', '.rgw.root', '.mgr', 'volumes', 'images', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta']
Oct 01 17:11:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 17:11:11 compute-0 ceph-mon[74273]: from='client.14505 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:11:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:11:11 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14507 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:11:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 17:11:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:11:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 17:11:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:11:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:11:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:11:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:11:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:11:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:11:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:11:12 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 01 17:11:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3147446166' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 01 17:11:12 compute-0 ceph-mon[74273]: from='client.14507 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:12 compute-0 ceph-mon[74273]: pgmap v1274: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3147446166' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 01 17:11:13 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.pmbdpj(active, since 37m)
Oct 01 17:11:14 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 01 17:11:14 compute-0 ceph-mon[74273]: mgrmap e18: compute-0.pmbdpj(active, since 37m)
Oct 01 17:11:14 compute-0 ceph-mon[74273]: pgmap v1275: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 01 17:11:16 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 01 17:11:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:11:17 compute-0 ceph-mon[74273]: pgmap v1276: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 01 17:11:17 compute-0 ovs-vsctl[280477]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 01 17:11:18 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 01 17:11:18 compute-0 ceph-mon[74273]: pgmap v1277: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 01 17:11:18 compute-0 virtqemud[259310]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 01 17:11:18 compute-0 nova_compute[259504]: 2025-10-01 17:11:18.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:11:18 compute-0 virtqemud[259310]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 01 17:11:18 compute-0 virtqemud[259310]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 01 17:11:19 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: cache status {prefix=cache status} (starting...)
Oct 01 17:11:19 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: client ls {prefix=client ls} (starting...)
Oct 01 17:11:19 compute-0 lvm[280822]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 01 17:11:19 compute-0 lvm[280822]: VG ceph_vg0 finished
Oct 01 17:11:19 compute-0 lvm[280824]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 01 17:11:19 compute-0 lvm[280824]: VG ceph_vg1 finished
Oct 01 17:11:19 compute-0 podman[280833]: 2025-10-01 17:11:19.975707168 +0000 UTC m=+0.126053392 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 17:11:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:11:19.981 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:11:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:11:19.981 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:11:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:11:19.982 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:11:20 compute-0 lvm[280887]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 01 17:11:20 compute-0 lvm[280887]: VG ceph_vg2 finished
Oct 01 17:11:20 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14511 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:20 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 01 17:11:20 compute-0 kernel: block loop3: the capability attribute has been deprecated.
Oct 01 17:11:20 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: damage ls {prefix=damage ls} (starting...)
Oct 01 17:11:20 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: dump loads {prefix=dump loads} (starting...)
Oct 01 17:11:20 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14513 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:20 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 01 17:11:20 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 01 17:11:20 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 01 17:11:20 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 01 17:11:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 01 17:11:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/209512273' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14519 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:21 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:11:21.111+0000 7f816b913640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 01 17:11:21 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 01 17:11:21 compute-0 ceph-mon[74273]: from='client.14511 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:21 compute-0 ceph-mon[74273]: pgmap v1278: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 01 17:11:21 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/209512273' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005739061380803542 of space, bias 4.0, pg target 0.6886873656964251 quantized to 16 (current 16)
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:11:21 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 01 17:11:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:11:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1538312808' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:11:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 01 17:11:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2576758460' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 01 17:11:21 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: ops {prefix=ops} (starting...)
Oct 01 17:11:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 01 17:11:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2165412498' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 01 17:11:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 01 17:11:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1316843949' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 01 17:11:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:11:22 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 01 17:11:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 01 17:11:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1752634835' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 01 17:11:22 compute-0 ceph-mon[74273]: from='client.14513 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:22 compute-0 ceph-mon[74273]: from='client.14519 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:22 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1538312808' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:11:22 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2576758460' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 01 17:11:22 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2165412498' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 01 17:11:22 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1316843949' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 01 17:11:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 01 17:11:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/33130040' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 01 17:11:22 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session ls {prefix=session ls} (starting...)
Oct 01 17:11:22 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: status {prefix=status} (starting...)
Oct 01 17:11:22 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14533 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 01 17:11:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1906210581' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 01 17:11:22 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14537 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 01 17:11:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3659065413' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 01 17:11:23 compute-0 ceph-mon[74273]: pgmap v1279: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 01 17:11:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1752634835' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 01 17:11:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/33130040' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 01 17:11:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1906210581' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 01 17:11:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3659065413' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 01 17:11:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 01 17:11:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2020081953' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 01 17:11:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 01 17:11:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/741343272' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 01 17:11:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 01 17:11:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3164474368' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 01 17:11:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 01 17:11:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1115099366' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 01 17:11:24 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14549 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:24 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 01 17:11:24 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:11:24.222+0000 7f816b913640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 01 17:11:24 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 01 17:11:24 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14551 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:24 compute-0 ceph-mon[74273]: from='client.14533 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:24 compute-0 ceph-mon[74273]: from='client.14537 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:24 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2020081953' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 01 17:11:24 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/741343272' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 01 17:11:24 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3164474368' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 01 17:11:24 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1115099366' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 01 17:11:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 01 17:11:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/892563597' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 01 17:11:24 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14555 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:24 compute-0 nova_compute[259504]: 2025-10-01 17:11:24.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:11:24 compute-0 nova_compute[259504]: 2025-10-01 17:11:24.751 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 17:11:24 compute-0 nova_compute[259504]: 2025-10-01 17:11:24.751 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 17:11:25 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14559 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 01 17:11:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3105034247' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 01 17:11:25 compute-0 nova_compute[259504]: 2025-10-01 17:11:25.217 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 17:11:25 compute-0 ceph-mon[74273]: from='client.14549 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:25 compute-0 ceph-mon[74273]: pgmap v1280: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 01 17:11:25 compute-0 ceph-mon[74273]: from='client.14551 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/892563597' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 01 17:11:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3105034247' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 01 17:11:25 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14561 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 01 17:11:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/209229292' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 01 17:11:25 compute-0 podman[281858]: 2025-10-01 17:11:25.741758177 +0000 UTC m=+0.061259068 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 01 17:11:25 compute-0 nova_compute[259504]: 2025-10-01 17:11:25.749 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:38:44.272292+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:14.041116+0000 osd.2 (osd.2) 116 : cluster [DBG] 7.5 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:14.055227+0000 osd.2 (osd.2) 117 : cluster [DBG] 7.5 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 117) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:14.041116+0000 osd.2 (osd.2) 116 : cluster [DBG] 7.5 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:14.055227+0000 osd.2 (osd.2) 117 : cluster [DBG] 7.5 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 1163264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:38:45.272543+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 1163264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:38:46.272684+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 1155072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766894 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:38:47.272813+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 1155072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.b scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.895071983s of 11.926254272s, submitted: 8
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.b scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:38:48.272958+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:18.091518+0000 osd.2 (osd.2) 118 : cluster [DBG] 11.b scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:18.105602+0000 osd.2 (osd.2) 119 : cluster [DBG] 11.b scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 119) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:18.091518+0000 osd.2 (osd.2) 118 : cluster [DBG] 11.b scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:18.105602+0000 osd.2 (osd.2) 119 : cluster [DBG] 11.b scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 1155072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:38:49.273123+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 1146880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:38:50.273264+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:20.068108+0000 osd.2 (osd.2) 120 : cluster [DBG] 7.c scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:20.082214+0000 osd.2 (osd.2) 121 : cluster [DBG] 7.c scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 121) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:20.068108+0000 osd.2 (osd.2) 120 : cluster [DBG] 7.c scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:20.082214+0000 osd.2 (osd.2) 121 : cluster [DBG] 7.c scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 1146880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:38:51.273444+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 1138688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 769189 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:38:52.273574+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:22.153642+0000 osd.2 (osd.2) 122 : cluster [DBG] 11.12 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:22.167711+0000 osd.2 (osd.2) 123 : cluster [DBG] 11.12 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 123) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:22.153642+0000 osd.2 (osd.2) 122 : cluster [DBG] 11.12 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:22.167711+0000 osd.2 (osd.2) 123 : cluster [DBG] 11.12 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 1138688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:38:53.273793+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 1114112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:38:54.273947+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 1114112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:38:55.274190+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 1114112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:38:56.274600+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 1105920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 770338 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:38:57.274761+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 1105920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.008382797s of 10.028627396s, submitted: 6
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:38:58.274960+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:28.120106+0000 osd.2 (osd.2) 124 : cluster [DBG] 3.8 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:28.134198+0000 osd.2 (osd.2) 125 : cluster [DBG] 3.8 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 125) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:28.120106+0000 osd.2 (osd.2) 124 : cluster [DBG] 3.8 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:28.134198+0000 osd.2 (osd.2) 125 : cluster [DBG] 3.8 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 1097728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:38:59.275262+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 1097728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:00.275450+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 1097728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:01.275576+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 1089536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771485 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:02.275709+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 1089536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:03.275961+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 1081344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:04.276139+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:34.109548+0000 osd.2 (osd.2) 126 : cluster [DBG] 11.2 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:34.123163+0000 osd.2 (osd.2) 127 : cluster [DBG] 11.2 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 127) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:34.109548+0000 osd.2 (osd.2) 126 : cluster [DBG] 11.2 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:34.123163+0000 osd.2 (osd.2) 127 : cluster [DBG] 11.2 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1073152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:05.276502+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1073152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:06.276710+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 1064960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 772633 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:07.276861+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 1056768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:08.277007+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1048576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:09.277200+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1048576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:10.277374+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1048576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:11.277526+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 772633 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:12.277693+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:13.277857+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:14.278037+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:15.278161+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.943264008s of 17.958541870s, submitted: 4
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:16.278320+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:46.078787+0000 osd.2 (osd.2) 128 : cluster [DBG] 7.e scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:46.092815+0000 osd.2 (osd.2) 129 : cluster [DBG] 7.e scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 129) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:46.078787+0000 osd.2 (osd.2) 128 : cluster [DBG] 7.e scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:46.092815+0000 osd.2 (osd.2) 129 : cluster [DBG] 7.e scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.2 deep-scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.2 deep-scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 774927 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:17.278555+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:47.100636+0000 osd.2 (osd.2) 130 : cluster [DBG] 8.2 deep-scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:47.114761+0000 osd.2 (osd.2) 131 : cluster [DBG] 8.2 deep-scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 131) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:47.100636+0000 osd.2 (osd.2) 130 : cluster [DBG] 8.2 deep-scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:47.114761+0000 osd.2 (osd.2) 131 : cluster [DBG] 8.2 deep-scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:18.278824+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:19.283245+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:20.283520+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:21.285118+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 958464 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 776075 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:22.285601+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:52.116097+0000 osd.2 (osd.2) 132 : cluster [DBG] 11.8 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:52.130206+0000 osd.2 (osd.2) 133 : cluster [DBG] 11.8 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 133) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:52.116097+0000 osd.2 (osd.2) 132 : cluster [DBG] 11.8 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:52.130206+0000 osd.2 (osd.2) 133 : cluster [DBG] 11.8 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:23.286681+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:24.287036+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:25.288291+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:26.288985+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.e scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.937836647s of 10.958680153s, submitted: 6
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.e scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 777222 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:27.289233+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:57.037342+0000 osd.2 (osd.2) 134 : cluster [DBG] 3.e scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:57.051411+0000 osd.2 (osd.2) 135 : cluster [DBG] 3.e scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 135) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:57.037342+0000 osd.2 (osd.2) 134 : cluster [DBG] 3.e scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:57.051411+0000 osd.2 (osd.2) 135 : cluster [DBG] 3.e scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:28.289460+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:29.289664+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:59.077296+0000 osd.2 (osd.2) 136 : cluster [DBG] 8.4 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:39:59.091684+0000 osd.2 (osd.2) 137 : cluster [DBG] 8.4 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 137) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:59.077296+0000 osd.2 (osd.2) 136 : cluster [DBG] 8.4 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:39:59.091684+0000 osd.2 (osd.2) 137 : cluster [DBG] 8.4 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:30.290091+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:31.290284+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 778369 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:32.290876+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.a scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.a scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:33.291188+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:02.975442+0000 osd.2 (osd.2) 138 : cluster [DBG] 7.a scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:02.989518+0000 osd.2 (osd.2) 139 : cluster [DBG] 7.a scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 139) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:02.975442+0000 osd.2 (osd.2) 138 : cluster [DBG] 7.a scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:02.989518+0000 osd.2 (osd.2) 139 : cluster [DBG] 7.a scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:34.291809+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:35.292537+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:36.292663+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:05.997570+0000 osd.2 (osd.2) 140 : cluster [DBG] 7.8 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:06.011700+0000 osd.2 (osd.2) 141 : cluster [DBG] 7.8 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 141) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:05.997570+0000 osd.2 (osd.2) 140 : cluster [DBG] 7.8 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:06.011700+0000 osd.2 (osd.2) 141 : cluster [DBG] 7.8 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 780663 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:37.292872+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:38.293052+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.870950699s of 11.901948929s, submitted: 8
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:39.293218+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:08.939321+0000 osd.2 (osd.2) 142 : cluster [DBG] 11.18 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:08.953385+0000 osd.2 (osd.2) 143 : cluster [DBG] 11.18 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 143) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:08.939321+0000 osd.2 (osd.2) 142 : cluster [DBG] 11.18 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:08.953385+0000 osd.2 (osd.2) 143 : cluster [DBG] 11.18 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 868352 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:40.293536+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 868352 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:41.293748+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 781812 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:42.293968+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:43.294338+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:12.985105+0000 osd.2 (osd.2) 144 : cluster [DBG] 11.3 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:12.999226+0000 osd.2 (osd.2) 145 : cluster [DBG] 11.3 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 145) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:12.985105+0000 osd.2 (osd.2) 144 : cluster [DBG] 11.3 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:12.999226+0000 osd.2 (osd.2) 145 : cluster [DBG] 11.3 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:44.294735+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:45.294907+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:46.295061+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:15.974163+0000 osd.2 (osd.2) 146 : cluster [DBG] 7.15 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:15.988237+0000 osd.2 (osd.2) 147 : cluster [DBG] 7.15 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 147) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:15.974163+0000 osd.2 (osd.2) 146 : cluster [DBG] 7.15 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:15.988237+0000 osd.2 (osd.2) 147 : cluster [DBG] 7.15 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784108 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:47.295246+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 835584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:48.295377+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 827392 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:49.295532+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:50.295704+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:51.295940+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784108 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:52.296253+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:53.296489+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 811008 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:54.296734+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.021923065s of 16.041292191s, submitted: 6
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:55.296944+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:24.980680+0000 osd.2 (osd.2) 148 : cluster [DBG] 11.1a scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:24.994804+0000 osd.2 (osd.2) 149 : cluster [DBG] 11.1a scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 149) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:24.980680+0000 osd.2 (osd.2) 148 : cluster [DBG] 11.1a scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:24.994804+0000 osd.2 (osd.2) 149 : cluster [DBG] 11.1a scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.1b deep-scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.1b deep-scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:56.298084+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:25.954510+0000 osd.2 (osd.2) 150 : cluster [DBG] 11.1b deep-scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:25.968513+0000 osd.2 (osd.2) 151 : cluster [DBG] 11.1b deep-scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 770048 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 151) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:25.954510+0000 osd.2 (osd.2) 150 : cluster [DBG] 11.1b deep-scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:25.968513+0000 osd.2 (osd.2) 151 : cluster [DBG] 11.1b deep-scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:57.299060+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:26.970165+0000 osd.2 (osd.2) 152 : cluster [DBG] 8.1b scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:26.984290+0000 osd.2 (osd.2) 153 : cluster [DBG] 8.1b scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 770048 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 787554 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 153) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:26.970165+0000 osd.2 (osd.2) 152 : cluster [DBG] 8.1b scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:26.984290+0000 osd.2 (osd.2) 153 : cluster [DBG] 8.1b scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:58.299849+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:27.978859+0000 osd.2 (osd.2) 154 : cluster [DBG] 11.1c scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:27.992988+0000 osd.2 (osd.2) 155 : cluster [DBG] 11.1c scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 745472 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 155) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:27.978859+0000 osd.2 (osd.2) 154 : cluster [DBG] 11.1c scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:27.992988+0000 osd.2 (osd.2) 155 : cluster [DBG] 11.1c scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.11 deep-scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.11 deep-scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:59.300782+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:28.996199+0000 osd.2 (osd.2) 156 : cluster [DBG] 7.11 deep-scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:29.010326+0000 osd.2 (osd.2) 157 : cluster [DBG] 7.11 deep-scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 737280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 157) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:28.996199+0000 osd.2 (osd.2) 156 : cluster [DBG] 7.11 deep-scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:29.010326+0000 osd.2 (osd.2) 157 : cluster [DBG] 7.11 deep-scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:00.301484+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 729088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:01.302119+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 729088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:02.302247+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:31.955977+0000 osd.2 (osd.2) 158 : cluster [DBG] 11.1e scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:31.970055+0000 osd.2 (osd.2) 159 : cluster [DBG] 11.1e scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 720896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 791000 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 159) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:31.955977+0000 osd.2 (osd.2) 158 : cluster [DBG] 11.1e scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:31.970055+0000 osd.2 (osd.2) 159 : cluster [DBG] 11.1e scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:03.302606+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 720896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:04.302985+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 704512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:05.303255+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 704512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:06.303447+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:07.303591+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 791000 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:08.303738+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.960369110s of 14.000273705s, submitted: 12
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:09.303933+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:38.980959+0000 osd.2 (osd.2) 160 : cluster [DBG] 11.1f scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:38.998596+0000 osd.2 (osd.2) 161 : cluster [DBG] 11.1f scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 688128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 161) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:38.980959+0000 osd.2 (osd.2) 160 : cluster [DBG] 11.1f scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:38.998596+0000 osd.2 (osd.2) 161 : cluster [DBG] 11.1f scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:10.304175+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 688128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:11.304304+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:12.304457+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:41.996873+0000 osd.2 (osd.2) 162 : cluster [DBG] 8.1c scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:42.010966+0000 osd.2 (osd.2) 163 : cluster [DBG] 8.1c scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 793297 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 163) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:41.996873+0000 osd.2 (osd.2) 162 : cluster [DBG] 8.1c scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:42.010966+0000 osd.2 (osd.2) 163 : cluster [DBG] 8.1c scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:13.304691+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:14.304882+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:15.305045+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:45.022602+0000 osd.2 (osd.2) 164 : cluster [DBG] 3.16 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:45.036730+0000 osd.2 (osd.2) 165 : cluster [DBG] 3.16 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 165) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:45.022602+0000 osd.2 (osd.2) 164 : cluster [DBG] 3.16 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:45.036730+0000 osd.2 (osd.2) 165 : cluster [DBG] 3.16 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:16.305206+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.1 deep-scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 7.1 deep-scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:17.305326+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:46.983621+0000 osd.2 (osd.2) 166 : cluster [DBG] 7.1 deep-scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:46.997726+0000 osd.2 (osd.2) 167 : cluster [DBG] 7.1 deep-scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 795592 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 167) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:46.983621+0000 osd.2 (osd.2) 166 : cluster [DBG] 7.1 deep-scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:46.997726+0000 osd.2 (osd.2) 167 : cluster [DBG] 7.1 deep-scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:18.305502+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:19.305616+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.039050102s of 11.064813614s, submitted: 8
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:20.305826+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:50.045830+0000 osd.2 (osd.2) 168 : cluster [DBG] 3.11 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:50.059961+0000 osd.2 (osd.2) 169 : cluster [DBG] 3.11 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 655360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 169) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:50.045830+0000 osd.2 (osd.2) 168 : cluster [DBG] 3.11 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:50.059961+0000 osd.2 (osd.2) 169 : cluster [DBG] 3.11 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:21.305970+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 655360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:22.306144+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796740 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:23.306282+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:24.306424+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:25.306552+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.e scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.e scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:26.306688+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:56.095907+0000 osd.2 (osd.2) 170 : cluster [DBG] 9.e scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:56.127694+0000 osd.2 (osd.2) 171 : cluster [DBG] 9.e scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 171) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:56.095907+0000 osd.2 (osd.2) 170 : cluster [DBG] 9.e scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:56.127694+0000 osd.2 (osd.2) 171 : cluster [DBG] 9.e scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:27.306913+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 797887 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:28.307047+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:58.018878+0000 osd.2 (osd.2) 172 : cluster [DBG] 9.6 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:58.054193+0000 osd.2 (osd.2) 173 : cluster [DBG] 9.6 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 173) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:58.018878+0000 osd.2 (osd.2) 172 : cluster [DBG] 9.6 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:58.054193+0000 osd.2 (osd.2) 173 : cluster [DBG] 9.6 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:29.307260+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:59.009357+0000 osd.2 (osd.2) 174 : cluster [DBG] 6.8 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:40:59.023782+0000 osd.2 (osd.2) 175 : cluster [DBG] 6.8 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 175) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:59.009357+0000 osd.2 (osd.2) 174 : cluster [DBG] 6.8 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:40:59.023782+0000 osd.2 (osd.2) 175 : cluster [DBG] 6.8 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:30.307501+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.887284279s of 10.914896011s, submitted: 8
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:31.307613+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:41:00.960670+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.17 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:41:00.985377+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.17 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 177) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:41:00.960670+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.17 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:41:00.985377+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.17 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.f deep-scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.f deep-scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:32.307748+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:41:01.991204+0000 osd.2 (osd.2) 178 : cluster [DBG] 9.f deep-scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:41:02.030028+0000 osd.2 (osd.2) 179 : cluster [DBG] 9.f deep-scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 606208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 802476 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 179) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:41:01.991204+0000 osd.2 (osd.2) 178 : cluster [DBG] 9.f deep-scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:41:02.030028+0000 osd.2 (osd.2) 179 : cluster [DBG] 9.f deep-scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:33.307935+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 598016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:34.308109+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:35.308254+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:41:04.990326+0000 osd.2 (osd.2) 180 : cluster [DBG] 9.7 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:41:05.022114+0000 osd.2 (osd.2) 181 : cluster [DBG] 9.7 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 581632 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 181) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:41:04.990326+0000 osd.2 (osd.2) 180 : cluster [DBG] 9.7 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:41:05.022114+0000 osd.2 (osd.2) 181 : cluster [DBG] 9.7 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:36.308421+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:37.308563+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 803623 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:38.308718+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:39.308943+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:40.309135+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:41.309292+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:41:10.880738+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.18 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:41:10.908981+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.18 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 183) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:41:10.880738+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.18 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:41:10.908981+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.18 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:42.309597+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 557056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 804771 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:43.309768+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 557056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:44.309947+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 548864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.8 deep-scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.947059631s of 13.981809616s, submitted: 8
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.8 deep-scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:45.310105+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:41:14.942544+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.8 deep-scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:41:14.981355+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.8 deep-scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 185) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:41:14.942544+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.8 deep-scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:41:14.981355+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.8 deep-scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:46.310300+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:47.310453+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 532480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805918 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:48.310615+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 532480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:49.310771+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 532480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:50.310955+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 524288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:51.311134+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 524288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:52.311306+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 516096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805918 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:53.311565+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 516096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:54.311715+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 516096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.c scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.071781158s of 10.079121590s, submitted: 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.c scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:55.311876+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:41:25.021767+0000 osd.2 (osd.2) 186 : cluster [DBG] 9.c scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:41:25.053466+0000 osd.2 (osd.2) 187 : cluster [DBG] 9.c scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 187) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:41:25.021767+0000 osd.2 (osd.2) 186 : cluster [DBG] 9.c scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:41:25.053466+0000 osd.2 (osd.2) 187 : cluster [DBG] 9.c scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 507904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:56.313855+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 507904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:57.313980+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 507904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807065 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:58.314096+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:41:27.971778+0000 osd.2 (osd.2) 188 : cluster [DBG] 6.f scrub starts
Oct 01 17:11:25 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14565 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:41:27.996513+0000 osd.2 (osd.2) 189 : cluster [DBG] 6.f scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 189) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:41:27.971778+0000 osd.2 (osd.2) 188 : cluster [DBG] 6.f scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:41:27.996513+0000 osd.2 (osd.2) 189 : cluster [DBG] 6.f scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:59.314363+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:00.314554+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:01.314712+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:41:30.963846+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.13 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:41:30.995606+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.13 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 191) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:41:30.963846+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.13 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:41:30.995606+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.13 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 491520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:02.315752+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 491520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 809360 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:03.315963+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:41:32.977534+0000 osd.2 (osd.2) 192 : cluster [DBG] 9.19 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  will send 2025-10-01T16:41:33.016455+0000 osd.2 (osd.2) 193 : cluster [DBG] 9.19 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client handle_log_ack log(last 193) v1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:41:32.977534+0000 osd.2 (osd.2) 192 : cluster [DBG] 9.19 scrub starts
Oct 01 17:11:25 compute-0 ceph-osd[90269]: log_client  logged 2025-10-01T16:41:33.016455+0000 osd.2 (osd.2) 193 : cluster [DBG] 9.19 scrub ok
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 475136 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:04.316280+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 475136 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:05.316553+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 475136 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:06.316832+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 466944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:07.317088+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 466944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:08.317343+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 458752 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:09.317574+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 458752 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:10.317764+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 458752 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:11.317868+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 450560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:12.317950+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 450560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:13.318139+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 434176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:14.318270+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 434176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:15.318466+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 434176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:16.319379+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 425984 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:17.319534+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 425984 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:18.319679+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 425984 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:19.319798+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 417792 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:20.320021+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 417792 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:21.320164+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 417792 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:22.320281+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 409600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:23.320401+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 409600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:24.320543+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 401408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:25.320780+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 401408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:26.320944+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 401408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:27.321106+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 393216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:28.321245+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 393216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:29.321438+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 393216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:30.322210+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 385024 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:31.322385+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 385024 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:32.322818+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 385024 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:33.323020+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 376832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:34.323358+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 376832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:35.323645+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 368640 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:36.323885+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 368640 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:37.324095+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 368640 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:38.324317+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 352256 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:39.324466+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 352256 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:40.324662+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 344064 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:41.325581+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 335872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:42.325771+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 335872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:43.325921+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 319488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:44.326087+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 319488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:45.326365+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 319488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:46.326573+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 311296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:47.326718+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 311296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:48.327071+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 303104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:49.327282+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 303104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:50.327486+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 303104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:51.327660+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 294912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:52.327820+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 294912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:53.327952+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 294912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:54.328102+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 286720 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:55.328275+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 286720 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:56.328436+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 278528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:57.328555+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 278528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:58.328677+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 278528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:59.328843+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:00.329062+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:01.330122+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:02.330284+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:03.330426+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:04.331188+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:05.331405+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:06.331564+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:07.331740+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:08.331871+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:09.332018+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:10.332249+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:11.332473+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:12.332751+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:13.333013+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:14.333358+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:15.333719+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:16.333948+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:17.334112+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:18.334308+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:19.334480+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:20.334667+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:21.334813+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:22.334941+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:23.335051+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:24.335207+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:25.335363+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:26.335656+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:27.335823+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:28.335945+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:29.336284+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:30.336455+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:31.336590+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:32.336736+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:33.336933+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:34.337111+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:35.337251+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:36.337816+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:37.337991+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:38.338140+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:39.338571+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 163840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:40.339090+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 163840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:41.339223+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:42.339408+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:43.339596+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:44.339727+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 131072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:45.339886+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 131072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:46.340047+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 131072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:47.340169+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:48.340307+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:49.340420+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:50.340771+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:51.341005+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:52.341141+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:53.341291+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:54.341494+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:55.341689+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:56.341841+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:57.342010+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:58.342161+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:59.342370+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:00.343269+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:01.343414+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:02.343663+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:03.343828+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:04.344045+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:05.344315+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:06.344454+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:07.344570+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:08.344687+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:09.344887+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:10.345115+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:11.345257+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:12.345404+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:13.345556+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:14.345736+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:15.345994+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:16.346313+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:17.346477+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:18.346648+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:19.346781+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:20.346978+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:21.347113+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:22.347232+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:23.347395+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:24.347575+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:25.347798+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:26.347956+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:27.348234+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:28.348405+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:29.348592+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:30.348801+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:31.348954+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:32.349086+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:33.349234+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:34.349346+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:35.349463+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:36.349721+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:37.349918+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:38.350036+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:39.350174+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:40.350361+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:41.350499+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:42.350649+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:43.350797+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:44.350960+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:45.351152+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:46.351308+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:47.351501+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:48.351646+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:49.351832+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:50.352722+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:51.352938+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:52.353136+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:53.353285+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:54.353463+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:55.353614+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:56.354064+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:57.354211+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 925696 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:58.354373+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 925696 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:59.354495+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 925696 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:00.354648+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:01.354840+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:02.355089+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:03.355255+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:04.355468+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:05.355627+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:06.355779+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:07.355957+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:08.356066+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:09.356263+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:10.356462+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:11.356646+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:12.356872+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:13.357048+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:14.357226+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:15.357392+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:16.357570+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:17.357717+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:18.357851+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 851968 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:19.357977+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:20.358162+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 843776 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:21.358338+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 843776 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:22.358470+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 843776 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:23.358609+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68313088 unmapped: 835584 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:24.358733+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68313088 unmapped: 835584 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:25.358916+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68321280 unmapped: 827392 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:26.359036+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68321280 unmapped: 827392 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:27.359157+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 819200 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:28.359374+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 819200 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:29.359536+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 819200 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:30.359723+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 819200 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:31.359926+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 811008 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:32.360071+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 811008 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:33.360237+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 811008 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:34.360380+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 802816 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:35.360504+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 802816 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:36.360581+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68354048 unmapped: 794624 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:37.360665+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68354048 unmapped: 794624 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:38.360797+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68354048 unmapped: 794624 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:39.361035+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 786432 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:40.361210+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 786432 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:41.361468+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 786432 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:42.361593+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 778240 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:43.361701+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 786432 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:44.361808+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 786432 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:45.361927+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 778240 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5418 writes, 23K keys, 5418 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5418 writes, 774 syncs, 7.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5418 writes, 23K keys, 5418 commit groups, 1.0 writes per commit group, ingest: 18.33 MB, 0.03 MB/s
                                           Interval WAL: 5418 writes, 774 syncs, 7.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.55 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.55 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:46.362047+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:47.362162+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 712704 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:48.362292+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:49.362525+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:50.362718+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:51.362971+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 688128 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:52.363208+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 688128 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:53.363389+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:54.363548+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:55.363700+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:56.363854+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:57.363963+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:58.364119+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:59.364308+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 663552 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:00.364486+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 663552 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:01.364699+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:02.364846+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:03.365020+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:04.365207+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 630784 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:05.365366+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 630784 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:06.365503+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:07.365984+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:08.366145+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:09.366311+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:10.366544+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:11.366701+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:12.366854+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:13.366980+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:14.367100+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:15.367518+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:16.367747+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:17.367944+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 573440 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:18.368123+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 573440 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:19.368282+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68583424 unmapped: 565248 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:20.368585+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68583424 unmapped: 565248 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:21.368752+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68583424 unmapped: 565248 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:22.368978+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 557056 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:23.369116+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 557056 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:24.369341+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 557056 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:25.369567+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 548864 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:26.369773+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 548864 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:27.370089+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 548864 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:28.370224+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 540672 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:29.370485+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 532480 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:30.370688+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 532480 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:31.370837+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 532480 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:32.370971+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 532480 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:33.371112+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 524288 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:34.371260+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 524288 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:35.371520+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 516096 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:36.371652+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 516096 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:37.371852+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 516096 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 282.795837402s of 282.837615967s, submitted: 8
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:38.372046+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68583424 unmapped: 565248 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:39.372236+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:40.372415+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 360448 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:41.372575+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 360448 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:42.372716+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 360448 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:43.372944+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 360448 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:44.373118+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 360448 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:45.373283+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 352256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:46.373464+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 352256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:47.373608+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 344064 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:48.373756+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 335872 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:49.373908+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 335872 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:50.374090+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 327680 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:51.374218+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 327680 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:52.374342+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 319488 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:53.374514+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 319488 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:54.374706+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 311296 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:55.374847+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 311296 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:56.374931+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 311296 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:57.375057+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 303104 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:58.375158+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 303104 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:59.375275+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 303104 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:00.375599+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 294912 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:01.375689+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 294912 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:02.375846+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 286720 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:03.375960+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 286720 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:04.376139+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 286720 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:05.376305+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 278528 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:06.376525+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 278528 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:07.376642+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 270336 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:08.376758+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 270336 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:09.376885+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 270336 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:10.377219+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 262144 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:11.377417+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 262144 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:12.377561+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 253952 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:13.377701+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 253952 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:14.377866+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 253952 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:15.377946+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 245760 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:16.378055+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 245760 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:17.378176+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 237568 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:18.378301+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 237568 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:19.378432+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69967872 unmapped: 229376 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:20.378613+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69967872 unmapped: 229376 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:21.378747+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69967872 unmapped: 229376 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:22.378876+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69976064 unmapped: 221184 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:23.379032+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69976064 unmapped: 221184 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:24.379168+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 212992 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:25.379290+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 212992 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:26.379442+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 204800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:27.379586+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 204800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:28.379738+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 204800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:29.379908+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 196608 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:30.380125+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 196608 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:31.380288+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70008832 unmapped: 188416 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:32.380436+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70008832 unmapped: 188416 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:33.380594+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 163840 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:34.380744+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 163840 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:35.380918+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 163840 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:36.381040+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 155648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:37.381188+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 155648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:38.381341+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 155648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:39.381500+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 147456 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:40.381669+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 147456 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:41.381841+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 139264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:42.381955+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 139264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:43.382093+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 131072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:44.382237+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 131072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:45.382370+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 131072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:46.382515+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:47.382673+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:48.382823+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:49.383493+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:50.384525+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:51.384666+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:52.384812+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:53.384980+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:54.385096+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:55.385240+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:56.385344+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:57.385462+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:58.385634+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:59.385827+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:00.386079+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:01.386219+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:02.386368+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:03.386521+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:04.386658+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:05.386851+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:06.386990+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:07.387154+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:08.387283+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:09.387440+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:10.387632+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:11.387797+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:12.387955+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:13.388133+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:14.388269+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:15.388393+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:16.388515+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:17.388629+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:18.388766+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:19.388907+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:20.389059+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:21.389176+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:22.389338+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:23.389485+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:24.389617+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:25.389843+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:26.390645+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:27.390765+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:28.390932+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:29.391221+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:30.391817+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:31.391948+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:32.392082+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:33.392244+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:34.392399+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:35.392579+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:36.392691+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:37.392844+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:38.393095+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:39.393315+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:40.393492+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:41.393657+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:42.393787+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:43.393959+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:44.394072+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:45.394221+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:46.394371+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:47.394509+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:48.394683+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:49.394836+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:50.394988+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:51.395150+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:52.395263+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:53.395423+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 90112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:54.395609+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 90112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:55.395757+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 90112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:56.395970+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 90112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:57.396114+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 90112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:58.396353+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 90112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:59.397037+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 90112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:00.397213+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:01.397405+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:02.397594+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:03.397730+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:04.397857+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:05.397955+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:06.398101+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:07.398286+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:08.398463+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:09.398577+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:10.398730+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:11.398926+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:12.399071+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:13.399203+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:14.399579+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:15.399691+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:16.399819+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:17.399976+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:18.400131+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:19.400273+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:20.400454+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:21.400615+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:22.400777+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:23.400924+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:24.401069+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:25.401219+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:26.401324+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:27.401431+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:28.401545+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:29.401682+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:30.401836+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:31.401993+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:32.402149+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:33.402309+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:34.402438+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:35.402601+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:36.402795+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:37.402961+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:38.403079+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:39.403231+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:40.403410+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:41.403523+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:42.403637+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:43.403787+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:44.403984+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:45.404103+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:46.404240+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:47.404413+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:48.404534+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:49.404718+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:50.404933+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:51.405048+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:52.405185+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:53.405368+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:54.405484+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:55.405598+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:56.405731+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:57.405919+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:58.406079+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:59.406274+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:00.406473+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:01.407070+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:02.407261+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:03.408311+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:04.408453+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:05.409219+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:06.409405+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:07.409565+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:08.409829+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:09.410088+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:10.410270+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:11.410719+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:12.410929+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:13.411340+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 49152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:14.411478+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:15.411592+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:16.411723+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:17.411850+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:18.411956+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:19.412071+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:20.412471+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:21.413032+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:22.413190+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:23.413266+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:24.413417+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:25.413584+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:26.413701+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:27.413873+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:28.414013+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 32768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:29.414179+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 32768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:30.414400+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 32768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:31.414603+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 32768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:32.414715+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 32768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:33.414833+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 16384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:34.414929+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 16384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:35.415074+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 16384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:36.415194+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 16384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:37.415341+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 16384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:38.415490+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:39.415588+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:40.415746+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:41.415913+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:42.416011+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:43.416158+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:44.416268+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:45.416430+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:46.416572+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:47.416695+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:48.416804+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:49.417013+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:50.417743+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:51.417927+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:52.418077+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:53.418216+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:54.418366+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:55.418518+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:56.418697+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:57.418933+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:58.419121+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:59.419248+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:00.419418+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:01.419568+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:02.419718+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:03.419858+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:04.420204+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:05.420449+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:06.420601+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:07.420795+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:08.420926+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:09.421045+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:10.421263+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:11.421400+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:12.421547+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:13.421669+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:14.421808+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:15.421967+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:16.422100+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:17.422220+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:18.422349+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:19.422594+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:20.422806+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:21.422967+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:22.423126+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:23.423314+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:24.423466+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:25.423642+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:26.423770+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:27.424000+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:28.424151+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:29.424298+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:30.424450+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:31.424607+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:32.424779+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:33.424977+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:34.425110+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:35.425298+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:36.425461+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:37.425632+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:38.425798+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:39.426016+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:40.426254+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:41.426510+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:42.426771+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:43.426966+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 0 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:44.427203+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 0 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:45.427460+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 0 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:46.427646+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 0 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:47.427849+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 0 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:48.428044+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 0 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:49.428286+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 0 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:50.428515+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 0 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:51.428752+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 0 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:52.429028+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 0 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:53.429293+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:54.429487+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:55.429677+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:56.429857+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:57.430053+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:58.430283+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:59.430516+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:00.430796+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:01.430993+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:02.431172+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:03.431404+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:04.431605+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:05.431809+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:06.432071+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:07.432314+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:08.432550+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:09.432989+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:10.433550+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:11.433786+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:12.433954+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:13.434262+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:14.434597+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:15.434853+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:16.435070+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:17.435297+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:18.435465+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:19.435649+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:20.436051+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:21.436295+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:22.436526+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:23.436768+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:24.436970+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:25.437182+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:26.437344+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:27.437518+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:28.437699+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:29.437873+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:30.438075+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:31.438216+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:32.438340+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:33.438456+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:34.438559+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:35.439422+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:36.439549+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:37.439684+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:38.439793+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:39.439940+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:40.440070+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:41.440222+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:42.440378+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:43.440510+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:44.440647+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:45.440805+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:46.440962+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:47.441117+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:48.441286+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:49.441666+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:50.441868+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:51.442041+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:52.442158+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:53.442287+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:54.442490+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:55.442635+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:56.442787+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:57.442921+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:58.443079+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:59.443236+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:00.443400+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:01.443568+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:02.443714+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:03.443884+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:04.444055+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:05.444204+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:06.444380+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:07.444512+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:08.444654+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:09.444801+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:10.444972+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:11.445119+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:12.445270+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:13.445387+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:14.445545+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:15.445700+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:16.445865+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:17.445994+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:18.446143+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:19.446311+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:20.446462+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:21.446595+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:22.446759+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:23.446992+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:24.447157+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:25.447425+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:26.447703+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:27.447877+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:28.448045+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:29.448210+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:30.448448+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:31.448682+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:32.448853+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:33.448973+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:34.449123+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:35.449315+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:36.449495+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:37.449660+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:38.449810+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:39.450048+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:40.450216+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:41.450363+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:42.450577+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:43.450726+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:44.450856+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:45.450949+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:46.451088+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:47.451281+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:48.451476+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:49.451638+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:50.451810+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:51.451949+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:52.452134+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:53.452270+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:54.452402+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:55.452543+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:56.452661+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:57.452801+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:58.452957+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:59.453101+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:00.453281+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:01.453452+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:02.453594+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:03.453728+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:04.453872+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:05.454041+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:06.454168+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:07.454305+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:08.454426+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:09.454565+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:10.454761+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:11.454968+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:12.455136+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:13.455299+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:14.455442+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:15.455545+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:16.455760+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:17.455938+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:18.456074+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:19.456255+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:20.456369+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:21.456473+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:22.456632+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:23.456790+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:24.456965+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:25.457154+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:26.457309+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:27.457458+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:28.457599+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:29.457704+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:30.457845+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:31.458009+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:32.458151+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:33.458334+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:34.458487+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:35.458640+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:36.458777+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:37.458943+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:38.459126+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:39.459273+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:40.459663+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:41.459818+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:42.459990+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:43.460128+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:44.460273+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:45.460374+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:46.460545+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:47.460669+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:48.460805+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:49.460943+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:50.461960+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:51.462285+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:52.462998+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:53.463726+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:54.463871+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:55.464012+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:56.464168+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:57.464377+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:58.464526+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:59.464979+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:00.465274+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:01.465668+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:02.465970+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:03.466262+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:04.466471+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:05.466726+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:06.466939+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:07.467082+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:08.467248+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:09.467397+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:10.467597+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:11.467806+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:12.467963+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:13.468136+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:14.468299+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:15.468457+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:16.468606+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:17.468802+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:18.468981+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:19.469190+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:20.469421+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:21.469575+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:22.469755+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:23.469973+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:24.470143+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:25.470301+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:26.470467+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:27.470639+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:28.470769+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:29.470967+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:30.471166+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:31.471299+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:32.471411+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:33.471580+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:34.471749+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:35.471928+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:36.472075+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:37.472219+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:38.472347+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:39.472572+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:40.472759+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:41.472999+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:42.473195+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:43.473371+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:44.473631+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:45.473861+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 5598 writes, 23K keys, 5598 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5598 writes, 864 syncs, 6.48 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.55 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.55 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:46.474116+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:47.474276+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:48.474470+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:49.474622+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:50.474849+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:51.475032+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:52.475180+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:53.475308+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:54.475498+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:55.475707+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:56.475864+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:57.476032+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:58.476210+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:59.476365+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:00.476515+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:01.476664+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:02.476783+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:03.476958+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:04.477075+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:05.477246+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:06.477383+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:07.477530+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:08.477741+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:09.478028+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:10.478224+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:11.478441+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:12.478593+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:13.478731+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:14.478875+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:15.479087+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:16.479228+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:17.479378+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:18.479547+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:19.479733+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:20.480001+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:21.480148+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:22.480327+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:23.480467+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:24.480552+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:25.480668+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:26.480861+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:27.481046+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:28.481203+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:29.481388+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:30.481552+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:31.481709+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:32.481961+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:33.482126+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:34.482310+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:35.482443+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:36.482585+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:37.482701+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.436401367s of 600.340209961s, submitted: 90
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:38.482871+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:39.483046+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:40.483300+0000)
Oct 01 17:11:25 compute-0 nova_compute[259504]: 2025-10-01 17:11:25.868 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:11:25 compute-0 nova_compute[259504]: 2025-10-01 17:11:25.869 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:11:25 compute-0 nova_compute[259504]: 2025-10-01 17:11:25.869 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:11:25 compute-0 nova_compute[259504]: 2025-10-01 17:11:25.869 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 17:11:25 compute-0 nova_compute[259504]: 2025-10-01 17:11:25.869 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 1933312 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:41.483451+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:42.483654+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:43.483751+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:44.483953+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:45.484129+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:46.484360+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:47.484512+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:48.484656+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:49.484813+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:50.485009+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:51.485178+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:52.485356+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:53.485524+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:54.485701+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:55.485859+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:56.485976+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:57.486152+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:58.486254+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:59.486440+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:00.486587+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:01.486759+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:02.486879+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:03.487053+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:04.487260+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:05.487406+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:06.487635+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:07.487791+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:08.487969+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:09.488129+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:10.489780+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:11.489937+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:12.490110+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:13.490251+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:14.490372+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:15.490514+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:16.490696+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:17.490846+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:18.491009+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:19.491226+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:20.491452+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:21.491718+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:22.492016+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:23.492243+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:24.492420+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:25.492648+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:26.492876+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:27.493123+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:28.493302+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:29.493580+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:30.493822+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:31.493993+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:32.494227+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:33.494519+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:34.494721+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:35.495101+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:36.495418+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:37.495682+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:38.495868+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:39.496051+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:40.496230+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:41.496411+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:42.496573+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:43.496831+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:44.497093+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:45.497247+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:46.497402+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:47.497552+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:48.497725+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:49.497882+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:50.498141+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:51.498331+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:52.498486+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:53.498619+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:54.498772+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:55.498972+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:56.499109+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:57.499252+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:58.499388+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:59.499539+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:00.499749+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:01.499880+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:02.500083+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:03.500296+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:04.500460+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:05.500649+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:06.500840+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:07.501052+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:08.501233+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:09.501415+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:10.501617+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:11.501792+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:12.502028+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:13.502215+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:14.502428+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:15.502628+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:16.502812+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:17.502992+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:18.503138+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:19.503310+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:20.503534+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:21.503713+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:22.503874+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:23.504091+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:24.504253+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:25.504462+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:26.504640+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:27.504839+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:28.505029+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:29.505251+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:30.505520+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:31.505674+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:32.505815+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:33.505979+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 1900544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:34.506168+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 1900544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:35.506341+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 1900544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:36.506534+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 1900544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:37.506751+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 1900544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:38.506941+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 1900544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:39.507124+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 1900544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:40.507332+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 1900544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:41.507494+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:42.507641+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:43.507772+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:44.507972+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:45.508102+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:46.508242+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:47.508419+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:48.508572+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:49.508773+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:50.508994+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:51.509209+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:52.509422+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:53.509630+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:54.509836+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:55.510046+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:56.510204+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:57.510390+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:58.510560+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:59.510729+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:00.510968+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:01.511156+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:02.511340+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:03.511524+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:04.511700+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:05.511885+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:06.512079+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:07.512348+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:08.512553+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:09.512830+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:10.513012+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:11.513117+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:12.513284+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:13.513449+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:14.513595+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:15.513730+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:16.513879+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:17.514052+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:18.514334+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:19.514501+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:20.514727+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:21.514851+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:22.514982+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:23.515141+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:24.515301+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:25.515444+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:26.515622+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:27.515770+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:28.515901+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:29.516079+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:30.516267+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:31.516395+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:32.516579+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:33.516753+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:34.516929+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:35.517219+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:36.517358+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:37.517563+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:38.517695+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:39.517817+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:40.518069+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:41.518249+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:42.518447+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:43.518624+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:44.518798+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:45.518972+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:46.519109+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:47.519219+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:48.519344+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:49.519555+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:50.519774+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:51.519992+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:52.520177+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:53.520304+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:54.520540+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:55.520716+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:56.520996+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:57.521172+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:58.521302+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:59.521522+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:00.521742+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:01.521984+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:02.522132+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:03.522272+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:04.522424+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:05.522560+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:06.522685+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:07.522836+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:08.523188+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:09.523411+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:10.523689+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:11.523914+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:12.524084+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:13.524242+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:14.524397+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:15.524517+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:16.524662+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:17.524822+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:18.525041+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:19.525186+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:20.525377+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:21.525506+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 222.690231323s of 224.058120728s, submitted: 90
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:22.525634+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: handle_auth_request added challenge on 0x562612741000
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 1835008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:23.525766+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _renew_subs
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 125 ms_handle_reset con 0x562612741000 session 0x56261030ab40
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 1687552 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fcaa9000/0x0/0x4ffc00000, data 0xbd288/0x173000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:24.525974+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 1687552 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:25.526134+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: handle_auth_request added challenge on 0x562612741400
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 879394 data_alloc: 218103808 data_used: 159744
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 18300928 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:26.526289+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _renew_subs
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 126 ms_handle_reset con 0x562612741400 session 0x562612fca5a0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 18259968 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:27.526449+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 18243584 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:28.526591+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 18243584 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:29.526719+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fc2a4000/0x0/0x4ffc00000, data 0x8c09bf/0x979000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 18243584 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:30.527014+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886478 data_alloc: 218103808 data_used: 172032
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 18243584 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:31.527175+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc2a1000/0x0/0x4ffc00000, data 0x8c2422/0x97c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 18210816 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:32.527291+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 18210816 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:33.527406+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 18210816 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:34.527539+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 18210816 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:35.527660+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886478 data_alloc: 218103808 data_used: 172032
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 18210816 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:36.527758+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc2a1000/0x0/0x4ffc00000, data 0x8c2422/0x97c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 18178048 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:37.527999+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 18178048 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:38.528129+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 18178048 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:39.528250+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 18178048 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:40.528411+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc2a1000/0x0/0x4ffc00000, data 0x8c2422/0x97c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886478 data_alloc: 218103808 data_used: 172032
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 18178048 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:41.528556+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 18178048 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:42.528840+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 18178048 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:43.528945+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: handle_auth_request added challenge on 0x56261301fc00
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.454425812s of 22.008068085s, submitted: 54
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 18169856 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:44.529084+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 18104320 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:45.529233+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc2a0000/0x0/0x4ffc00000, data 0x8c2558/0x97e000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Got map version 10
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889294 data_alloc: 218103808 data_used: 176128
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 18145280 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:46.529367+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 18145280 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:47.529503+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc2a0000/0x0/0x4ffc00000, data 0x8c2558/0x97e000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 18145280 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:48.529693+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 18145280 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:49.529811+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 18145280 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:50.529994+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc29f000/0x0/0x4ffc00000, data 0x8c25f3/0x97f000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 891062 data_alloc: 218103808 data_used: 176128
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 18145280 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:51.530157+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 18145280 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:52.530309+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 18145280 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:53.530435+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Got map version 11
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 18087936 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:54.530542+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc29f000/0x0/0x4ffc00000, data 0x8c25f3/0x97f000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: handle_auth_request added challenge on 0x56261301f800
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.824827194s of 10.837122917s, submitted: 3
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 18087936 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:55.530667+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc29f000/0x0/0x4ffc00000, data 0x8c25f3/0x97f000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892302 data_alloc: 218103808 data_used: 176128
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:56.530818+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 18071552 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:57.530977+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 18071552 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:58.531141+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 18071552 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:59.531297+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 18071552 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:00.531518+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 18071552 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc29f000/0x0/0x4ffc00000, data 0x8c25f3/0x97f000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 891612 data_alloc: 218103808 data_used: 176128
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:01.531659+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 18055168 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:02.531817+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 18055168 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:03.531989+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 18055168 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:04.532124+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 18055168 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:05.532251+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 18055168 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.991678238s of 11.003929138s, submitted: 3
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 891612 data_alloc: 218103808 data_used: 176128
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:06.532382+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc29f000/0x0/0x4ffc00000, data 0x8c25f3/0x97f000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 18014208 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:07.532539+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 18014208 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:08.532705+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 18014208 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc29f000/0x0/0x4ffc00000, data 0x8c25f3/0x97f000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:09.532861+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 18014208 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:10.533100+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 18014208 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _renew_subs
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 895610 data_alloc: 218103808 data_used: 184320
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:11.533258+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 18055168 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:12.533444+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 18055168 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:13.533605+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 18055168 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fc29b000/0x0/0x4ffc00000, data 0x8c41d9/0x982000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:14.533752+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:15.533981+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 893350 data_alloc: 218103808 data_used: 184320
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:16.534148+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:17.534245+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fc29e000/0x0/0x4ffc00000, data 0x8c40a3/0x980000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:18.534385+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.727453232s of 13.014475822s, submitted: 28
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:19.534560+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:20.534734+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897524 data_alloc: 218103808 data_used: 192512
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:21.534943+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:22.535121+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fc29a000/0x0/0x4ffc00000, data 0x8c5b06/0x983000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:23.535304+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fc29a000/0x0/0x4ffc00000, data 0x8c5b06/0x983000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:24.535424+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:25.535590+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fc29a000/0x0/0x4ffc00000, data 0x8c5b06/0x983000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897524 data_alloc: 218103808 data_used: 192512
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:26.535707+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fc29a000/0x0/0x4ffc00000, data 0x8c5b06/0x983000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:27.535874+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fc29a000/0x0/0x4ffc00000, data 0x8c5b06/0x983000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:28.536113+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:29.536266+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:30.536464+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fc29b000/0x0/0x4ffc00000, data 0x8c5a6b/0x982000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:31.536633+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 896834 data_alloc: 218103808 data_used: 192512
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:32.536835+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:33.536963+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fc29b000/0x0/0x4ffc00000, data 0x8c5a6b/0x982000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:34.537093+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:35.537287+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:36.537498+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 896834 data_alloc: 218103808 data_used: 192512
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:37.537678+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:38.537855+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _renew_subs
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.400629044s of 19.416978836s, submitted: 14
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:39.538026+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fc298000/0x0/0x4ffc00000, data 0x8c7651/0x985000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:40.538213+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:41.538359+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899808 data_alloc: 218103808 data_used: 192512
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 18038784 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:42.538525+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 18038784 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:43.538690+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 18038784 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:44.538944+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 18038784 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fc298000/0x0/0x4ffc00000, data 0x8c7651/0x985000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:45.539059+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 18038784 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:46.539185+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899808 data_alloc: 218103808 data_used: 192512
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 18038784 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:47.539362+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 16990208 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:48.539567+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 16990208 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:49.539721+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 16990208 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.459293365s of 11.637549400s, submitted: 37
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:50.540000+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 16982016 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fc295000/0x0/0x4ffc00000, data 0x8c90b4/0x988000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:51.540127+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902782 data_alloc: 218103808 data_used: 192512
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 16982016 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fc295000/0x0/0x4ffc00000, data 0x8c90b4/0x988000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:52.540297+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 16982016 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:53.540462+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 16982016 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:54.540603+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 16982016 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:55.540763+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 16982016 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fc295000/0x0/0x4ffc00000, data 0x8c90b4/0x988000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:56.540943+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903670 data_alloc: 218103808 data_used: 192512
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 16982016 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:57.541103+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 16973824 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:58.541248+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 16973824 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:59.541404+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 16973824 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.640153885s of 10.068619728s, submitted: 5
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:00.541607+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 16973824 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fc293000/0x0/0x4ffc00000, data 0x8c9285/0x98b000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:01.541773+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907030 data_alloc: 218103808 data_used: 192512
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 16973824 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:02.541958+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 16973824 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fc28f000/0x0/0x4ffc00000, data 0x8cae6b/0x98e000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:03.542115+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 16973824 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:04.542278+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 16973824 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:05.542452+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 16973824 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:06.542621+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911050 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 16932864 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:07.542785+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: handle_auth_request added challenge on 0x56261301f400
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 16900096 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fc28a000/0x0/0x4ffc00000, data 0x8ccc8c/0x993000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [1])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:08.542956+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Got map version 12
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 16883712 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fc28a000/0x0/0x4ffc00000, data 0x8ccc8c/0x993000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:09.543298+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 16875520 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:10.543478+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.811242580s of 10.441696167s, submitted: 53
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 16867328 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:11.543632+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920358 data_alloc: 218103808 data_used: 212992
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 16859136 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _renew_subs
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:12.543785+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 16842752 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:13.543996+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 16842752 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:14.544126+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 16842752 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fc28b000/0x0/0x4ffc00000, data 0x8ce599/0x992000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:15.544302+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 16826368 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fc288000/0x0/0x4ffc00000, data 0x8d0207/0x995000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:16.544474+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925632 data_alloc: 218103808 data_used: 225280
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 16818176 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:17.544622+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 16818176 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:18.544757+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 16818176 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fc287000/0x0/0x4ffc00000, data 0x8d02a2/0x996000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:19.544921+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 16818176 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:20.545073+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 16818176 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 01 17:11:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/533120042' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.666393280s of 10.306402206s, submitted: 99
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:21.545184+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931706 data_alloc: 218103808 data_used: 233472
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 15769600 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:22.545309+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 15769600 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:23.545432+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 15769600 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:24.545576+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fc282000/0x0/0x4ffc00000, data 0x8d3805/0x99a000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 15769600 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fc282000/0x0/0x4ffc00000, data 0x8d3805/0x99a000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:25.545686+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 15769600 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:26.545833+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933318 data_alloc: 218103808 data_used: 233472
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 15753216 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:27.545969+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 15753216 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:28.546139+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 15753216 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:29.546268+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 15753216 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:30.546478+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 15720448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc280000/0x0/0x4ffc00000, data 0x8d5288/0x99d000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:31.546625+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935086 data_alloc: 218103808 data_used: 233472
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 15720448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:32.546774+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.658912659s of 11.807299614s, submitted: 52
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 15720448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:33.546962+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 15720448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc27f000/0x0/0x4ffc00000, data 0x8d53be/0x99f000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:34.547173+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 15720448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:35.547336+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 15712256 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:36.547468+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935798 data_alloc: 218103808 data_used: 233472
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 15712256 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc27f000/0x0/0x4ffc00000, data 0x8d53be/0x99f000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:37.547600+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 15712256 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:38.547760+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc27f000/0x0/0x4ffc00000, data 0x8d53be/0x99f000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 15712256 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:39.547959+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 15704064 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:40.548160+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 15704064 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc280000/0x0/0x4ffc00000, data 0x8d5323/0x99e000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 138 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:41.548307+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939004 data_alloc: 218103808 data_used: 262144
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 15695872 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:42.548467+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 15695872 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:43.548659+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:44.548802+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc27b000/0x0/0x4ffc00000, data 0x8d6fd4/0x9a2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc27b000/0x0/0x4ffc00000, data 0x8d6fd4/0x9a2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:45.548929+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.430072784s of 13.595589638s, submitted: 29
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:46.549037+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943570 data_alloc: 218103808 data_used: 262144
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:47.549202+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:48.549343+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:49.549499+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fc278000/0x0/0x4ffc00000, data 0x8d8a57/0x9a5000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:50.549675+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:51.550146+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945338 data_alloc: 218103808 data_used: 262144
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:52.550355+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:53.550650+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:54.550799+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fc279000/0x0/0x4ffc00000, data 0x8d8a57/0x9a5000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:55.551376+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _renew_subs
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:56.552171+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947076 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:57.552361+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:58.552638+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:59.552993+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc276000/0x0/0x4ffc00000, data 0x8da5a2/0x9a7000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.812130928s of 14.100782394s, submitted: 45
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:00.553317+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:01.553557+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950050 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:02.553798+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:03.553940+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc273000/0x0/0x4ffc00000, data 0x8dc005/0x9aa000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc273000/0x0/0x4ffc00000, data 0x8dc005/0x9aa000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:04.554077+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:05.554281+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:06.554532+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950050 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:07.554967+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc273000/0x0/0x4ffc00000, data 0x8dc005/0x9aa000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:08.555134+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:09.555767+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:10.555999+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:11.556196+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951818 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:12.556455+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Got map version 13
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:13.556591+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc272000/0x0/0x4ffc00000, data 0x8dc0a0/0x9ab000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 15630336 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:14.556765+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc272000/0x0/0x4ffc00000, data 0x8dc0a0/0x9ab000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 15630336 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:15.556956+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 15630336 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.800889015s of 15.819281578s, submitted: 16
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:16.557134+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950200 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 15630336 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:17.557332+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 15630336 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc273000/0x0/0x4ffc00000, data 0x8dc005/0x9aa000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:18.557475+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 15630336 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:19.557628+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 15622144 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:20.557986+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 15622144 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:21.558178+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc273000/0x0/0x4ffc00000, data 0x8dc0a0/0x9ab000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950938 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 15622144 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:22.558371+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 15622144 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc274000/0x0/0x4ffc00000, data 0x8dc005/0x9aa000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:23.558520+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc274000/0x0/0x4ffc00000, data 0x8dc005/0x9aa000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 15581184 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:24.558662+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 15581184 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:25.558817+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 15581184 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:26.558989+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951840 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 15581184 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.204713821s of 11.229690552s, submitted: 4
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:27.559149+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 15540224 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:28.559313+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 15540224 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:29.559454+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc272000/0x0/0x4ffc00000, data 0x8dc035/0x9ab000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 15540224 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:30.559607+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15556608 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:31.559758+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951968 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15556608 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:32.559929+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15556608 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:33.560090+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15556608 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:34.560237+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15556608 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:35.560402+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc272000/0x0/0x4ffc00000, data 0x8dc035/0x9ab000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15556608 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:36.560579+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951968 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15556608 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:37.560682+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15556608 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.572502136s of 10.632000923s, submitted: 12
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:38.560829+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 15499264 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:39.560969+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc271000/0x0/0x4ffc00000, data 0x8dc103/0x9ac000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 15499264 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc271000/0x0/0x4ffc00000, data 0x8dc103/0x9ac000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:40.561144+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 15499264 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 ms_handle_reset con 0x56261301f400 session 0x562612fc9c20
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:41.561315+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953560 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 14909440 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:42.561481+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Got map version 14
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 14901248 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:43.561643+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc271000/0x0/0x4ffc00000, data 0x8dc0d0/0x9ac000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 14901248 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:44.561803+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 14893056 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:45.561947+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 14876672 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:46.562086+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26e000/0x0/0x4ffc00000, data 0x8dc265/0x9ae000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958336 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 14843904 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:47.562208+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 14843904 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26e000/0x0/0x4ffc00000, data 0x8dc32e/0x9af000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:48.562359+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.431452751s of 10.563019753s, submitted: 199
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 14819328 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:49.562543+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 14819328 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:50.562756+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 14819328 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:51.562954+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957518 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 14786560 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:52.563115+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 14794752 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:53.563297+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc271000/0x0/0x4ffc00000, data 0x8dc1cc/0x9ad000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 14786560 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:54.563460+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 14786560 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:55.563596+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 14778368 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:56.563724+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958724 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 14778368 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:57.563866+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0x8dc232/0x9ae000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 14753792 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:58.563943+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 14753792 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:59.564112+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.824316025s of 10.967424393s, submitted: 29
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 14721024 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:00.564288+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 14721024 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:01.564484+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961346 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 14721024 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:02.566191+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0x8dc32c/0x9af000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 14721024 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:03.566308+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0x8dc32c/0x9af000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 13672448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:04.566454+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 13672448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:05.566634+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 13672448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:06.566830+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960612 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 13631488 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:07.584542+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 13631488 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:08.584773+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0x8dc1c8/0x9ad000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 13631488 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:09.584981+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.987728119s of 10.111098289s, submitted: 25
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 13631488 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:10.585182+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 13598720 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:11.585296+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962332 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 13598720 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0x8dc20a/0x9ae000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:12.585443+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 13598720 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:13.585621+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 13598720 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:14.585741+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26e000/0x0/0x4ffc00000, data 0x8dc1f5/0x9ae000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 13533184 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:15.585868+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 13533184 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:16.586018+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962800 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 13516800 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:17.586164+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc271000/0x0/0x4ffc00000, data 0x8dc170/0x9ad000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 13500416 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:18.586275+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 13500416 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:19.586346+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 13500416 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:20.586531+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.955349922s of 11.056019783s, submitted: 22
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 13475840 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:21.586680+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964568 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 13475840 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:22.586857+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26d000/0x0/0x4ffc00000, data 0x8dc1fc/0x9ae000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 13475840 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:23.586988+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 13475840 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:24.587154+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 13426688 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:25.587279+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 13426688 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:26.587418+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963832 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 13426688 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc270000/0x0/0x4ffc00000, data 0x8dc20a/0x9ae000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:27.587561+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 13418496 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:28.587846+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 13402112 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:29.588057+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 13402112 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:30.588303+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26e000/0x0/0x4ffc00000, data 0x8dc1fa/0x9ae000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 13402112 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.464566231s of 10.624966621s, submitted: 33
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:31.588658+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964862 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 13402112 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:32.588958+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 13402112 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26e000/0x0/0x4ffc00000, data 0x8dc209/0x9ae000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:33.589136+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 13410304 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:34.589402+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _renew_subs
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 13385728 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:35.589571+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 13377536 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:36.589759+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972314 data_alloc: 218103808 data_used: 278528
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 13303808 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:37.589983+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 13303808 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:38.590212+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fc266000/0x0/0x4ffc00000, data 0x8ddde2/0x9b1000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 13279232 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:39.590381+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fc26c000/0x0/0x4ffc00000, data 0x8dde23/0x9b1000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 13271040 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 143 handle_osd_map epochs [144,145], i have 143, src has [1,145]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:40.590537+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 13180928 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:41.590676+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976430 data_alloc: 218103808 data_used: 286720
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 13180928 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:42.590867+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 13180928 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.312106133s of 12.044094086s, submitted: 91
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:43.591043+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 13172736 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:44.591222+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fc267000/0x0/0x4ffc00000, data 0x8e13f0/0x9b6000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 13164544 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:45.591513+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 13164544 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:46.591695+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978682 data_alloc: 218103808 data_used: 294912
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 13156352 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:47.591970+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 13131776 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:48.592129+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 13123584 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:49.592307+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fc263000/0x0/0x4ffc00000, data 0x8e2e30/0x9b9000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 13123584 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:50.592481+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:51.592632+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977930 data_alloc: 218103808 data_used: 294912
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 13115392 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:52.592785+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 13115392 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:53.592958+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 13115392 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.075264931s of 11.000350952s, submitted: 27
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:54.593062+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 13115392 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:55.593212+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fc264000/0x0/0x4ffc00000, data 0x8e2dfd/0x9b8000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:56.593372+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976246 data_alloc: 218103808 data_used: 294912
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:57.593523+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:58.593656+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:59.593791+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:00.594081+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:01.594258+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fc266000/0x0/0x4ffc00000, data 0x8e2d33/0x9b7000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976550 data_alloc: 218103808 data_used: 294912
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:02.594386+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:03.594516+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:04.594663+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:05.594808+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.013538361s of 11.418750763s, submitted: 10
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12648448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:06.595048+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982684 data_alloc: 218103808 data_used: 294912
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12648448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:07.595333+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fc251000/0x0/0x4ffc00000, data 0x8f7825/0x9cc000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 12615680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:08.595509+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 12615680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:09.595708+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 12271616 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:10.595931+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 12173312 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fc22b000/0x0/0x4ffc00000, data 0x91daf7/0x9f2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:11.596093+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fc22b000/0x0/0x4ffc00000, data 0x91daf7/0x9f2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987080 data_alloc: 218103808 data_used: 294912
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 12173312 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:12.596239+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 9641984 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:13.596425+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 9478144 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:14.596574+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 9363456 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:15.596730+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 9363456 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:16.596938+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.701203346s of 11.124578476s, submitted: 48
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989334 data_alloc: 218103808 data_used: 294912
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 9109504 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:17.597161+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fb02b000/0x0/0x4ffc00000, data 0x97d095/0xa52000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 9109504 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:18.598159+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 9109504 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:19.598307+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 8962048 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:20.598519+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 8986624 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:21.598754+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994082 data_alloc: 218103808 data_used: 294912
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 8822784 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:22.598964+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4faff0000/0x0/0x4ffc00000, data 0x9b88d9/0xa8d000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 8568832 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:23.599164+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 8568832 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:24.599289+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 8380416 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:25.599475+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 7217152 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:26.599691+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993026 data_alloc: 218103808 data_used: 294912
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 7217152 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:27.599872+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.497201920s of 10.907721519s, submitted: 61
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:28.600114+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 7036928 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4faf79000/0x0/0x4ffc00000, data 0xa2ee5e/0xb04000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:29.600279+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 7168000 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:30.600465+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 6922240 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4faf41000/0x0/0x4ffc00000, data 0xa6721a/0xb3d000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:31.600631+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 6897664 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002310 data_alloc: 218103808 data_used: 294912
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:32.600752+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 6938624 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:33.600884+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 6938624 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:34.601035+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 6963200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:35.601223+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 7200768 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:36.601366+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 7176192 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4faef7000/0x0/0x4ffc00000, data 0xab16ad/0xb87000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007510 data_alloc: 218103808 data_used: 294912
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:37.601535+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 5840896 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.503462076s of 10.137934685s, submitted: 59
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4faede000/0x0/0x4ffc00000, data 0xaca0ee/0xba0000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:38.601678+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 5480448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:39.601849+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 5644288 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:40.602072+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 5636096 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:41.602216+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 5423104 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010706 data_alloc: 218103808 data_used: 294912
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:42.602360+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 5300224 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4faea5000/0x0/0x4ffc00000, data 0xb020a3/0xbd9000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:43.602646+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 5267456 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fae99000/0x0/0x4ffc00000, data 0xb0e3c7/0xbe5000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,1])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:44.602827+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 5128192 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fae98000/0x0/0x4ffc00000, data 0xb0f3fa/0xbe6000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:45.603011+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 5267456 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 7483 writes, 29K keys, 7483 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7482 writes, 1560 syncs, 4.80 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1885 writes, 5550 keys, 1885 commit groups, 1.0 writes per commit group, ingest: 5.36 MB, 0.01 MB/s
                                           Interval WAL: 1884 writes, 696 syncs, 2.71 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:46.603171+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 5251072 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fae5a000/0x0/0x4ffc00000, data 0xb4c322/0xc23000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1018658 data_alloc: 218103808 data_used: 294912
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:47.603416+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 5046272 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.035921097s of 10.091075897s, submitted: 57
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:48.603534+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 4022272 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fae2a000/0x0/0x4ffc00000, data 0xb7b397/0xc52000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:49.603713+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 3874816 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fae13000/0x0/0x4ffc00000, data 0xb9341f/0xc6a000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,0,0,0,4])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:50.603974+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 4153344 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:51.604269+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 4030464 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021560 data_alloc: 218103808 data_used: 294912
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:52.604447+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 3940352 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc ms_handle_reset ms_handle_reset con 0x562610a32000
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3235544197
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: get_auth_request con 0x562611b52800 auth_method 0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc handle_mgr_configure stats_period=5
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:53.604593+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 4186112 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:54.604792+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 4055040 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fad9a000/0x0/0x4ffc00000, data 0xc0c0ab/0xce3000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:55.604952+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3645440 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:56.605183+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 2998272 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029546 data_alloc: 218103808 data_used: 294912
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:57.605341+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 2154496 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.690278053s of 10.004817963s, submitted: 79
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:58.605543+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 2088960 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:59.605723+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 2523136 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fad08000/0x0/0x4ffc00000, data 0xc9f894/0xd75000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fad08000/0x0/0x4ffc00000, data 0xc9f894/0xd75000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:00.605979+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 2695168 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:01.606141+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 2646016 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030630 data_alloc: 218103808 data_used: 294912
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:02.606313+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 2416640 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:03.606449+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86695936 unmapped: 2383872 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4face7000/0x0/0x4ffc00000, data 0xcc11da/0xd97000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:04.606616+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86802432 unmapped: 2277376 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:05.606770+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 2260992 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:06.606948+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 2260992 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0xce2d96/0xdb8000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1031962 data_alloc: 218103808 data_used: 294912
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:07.607108+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 2310144 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:08.607279+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86777856 unmapped: 2301952 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0xce2d63/0xdb8000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:09.607506+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86777856 unmapped: 2301952 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:10.607743+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86777856 unmapped: 2301952 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:11.607887+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86777856 unmapped: 2301952 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.069967270s of 13.553690910s, submitted: 48
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0xce2d63/0xdb8000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:12.608082+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1032642 data_alloc: 218103808 data_used: 294912
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86786048 unmapped: 2293760 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4facc4000/0x0/0x4ffc00000, data 0xce2e30/0xdb9000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,1])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4facc1000/0x0/0x4ffc00000, data 0xce2f58/0xdbb000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:13.608206+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 2285568 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4facc1000/0x0/0x4ffc00000, data 0xce2f58/0xdbb000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:14.608372+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86802432 unmapped: 2277376 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:15.608542+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86810624 unmapped: 2269184 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:16.608677+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86810624 unmapped: 2269184 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:17.608802+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035954 data_alloc: 218103808 data_used: 294912
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86810624 unmapped: 2269184 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4facc1000/0x0/0x4ffc00000, data 0xce2f57/0xdbb000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:18.608946+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86810624 unmapped: 2269184 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:19.609077+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2252800 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:20.609259+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2252800 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:21.609498+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2252800 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.607082367s of 10.006441116s, submitted: 26
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:22.609633+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035650 data_alloc: 218103808 data_used: 294912
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2236416 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:23.609764+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2236416 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4facbf000/0x0/0x4ffc00000, data 0xce3020/0xdbc000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:24.609926+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 2211840 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:25.610062+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 2187264 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:26.610219+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 2187264 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:27.610417+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042184 data_alloc: 218103808 data_used: 303104
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87949312 unmapped: 1130496 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:28.610605+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87949312 unmapped: 1130496 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa8ae000/0x0/0x4ffc00000, data 0xce4c05/0xdbf000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:29.610792+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87957504 unmapped: 1122304 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa8ae000/0x0/0x4ffc00000, data 0xce4bd3/0xdbf000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:30.611016+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87957504 unmapped: 1122304 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:31.611185+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87957504 unmapped: 1122304 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.498619080s of 10.211069107s, submitted: 60
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:32.611325+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043426 data_alloc: 218103808 data_used: 303104
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa8ad000/0x0/0x4ffc00000, data 0xce4ba1/0xdbe000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87973888 unmapped: 1105920 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:33.611455+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87982080 unmapped: 1097728 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:34.611594+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87982080 unmapped: 1097728 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:35.611776+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87982080 unmapped: 1097728 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:36.611958+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87982080 unmapped: 1097728 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa8ad000/0x0/0x4ffc00000, data 0xce6763/0xdc0000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:37.612170+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047380 data_alloc: 218103808 data_used: 311296
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87982080 unmapped: 1097728 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:38.612364+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 2088960 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:39.612477+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 2023424 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:40.612622+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 88113152 unmapped: 2015232 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa8ac000/0x0/0x4ffc00000, data 0xce6815/0xdc1000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:41.613590+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 1990656 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.183484077s of 10.012637138s, submitted: 165
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa8a9000/0x0/0x4ffc00000, data 0xce83fb/0xdc4000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:42.613742+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053858 data_alloc: 218103808 data_used: 319488
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 1990656 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:43.613946+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 88154112 unmapped: 1974272 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa8a6000/0x0/0x4ffc00000, data 0xce8414/0xdc5000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:44.614141+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 88154112 unmapped: 1974272 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:45.614311+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 88162304 unmapped: 1966080 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:46.614495+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0xce8489/0xdc6000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 88178688 unmapped: 1949696 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0xce9eec/0xdc9000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:47.614671+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061560 data_alloc: 218103808 data_used: 319488
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89243648 unmapped: 884736 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:48.614847+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8a1000/0x0/0x4ffc00000, data 0xce9f7e/0xdca000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:49.615026+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:50.615218+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:51.615485+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:52.615666+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058656 data_alloc: 218103808 data_used: 319488
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:53.615819+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.486909866s of 11.658401489s, submitted: 39
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8a9000/0x0/0x4ffc00000, data 0xce9cb6/0xdc5000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:54.616220+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:55.616393+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:56.616586+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:57.616840+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057902 data_alloc: 218103808 data_used: 327680
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:58.617067+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa8a6000/0x0/0x4ffc00000, data 0xceb89c/0xdc8000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa8a6000/0x0/0x4ffc00000, data 0xceb89c/0xdc8000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:59.617249+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:00.617455+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89260032 unmapped: 868352 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:01.617657+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89260032 unmapped: 1916928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:02.617810+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065032 data_alloc: 218103808 data_used: 335872
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89260032 unmapped: 1916928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:03.617997+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89260032 unmapped: 1916928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa89f000/0x0/0x4ffc00000, data 0xceee4a/0xdcd000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:04.618147+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89260032 unmapped: 1916928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:05.618385+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89260032 unmapped: 1916928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:06.618571+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89260032 unmapped: 1916928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:07.618710+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065032 data_alloc: 218103808 data_used: 335872
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89260032 unmapped: 1916928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.398724556s of 14.739449501s, submitted: 62
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:08.618846+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa89f000/0x0/0x4ffc00000, data 0xceee4a/0xdcd000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89268224 unmapped: 1908736 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:09.619008+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89268224 unmapped: 1908736 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:10.619223+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 90333184 unmapped: 843776 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:11.619370+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 90333184 unmapped: 843776 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:12.619497+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071592 data_alloc: 218103808 data_used: 335872
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 90333184 unmapped: 843776 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:13.619627+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 90333184 unmapped: 843776 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:14.619724+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fa899000/0x0/0x4ffc00000, data 0xcf24ef/0xdd3000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 90341376 unmapped: 835584 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:15.619983+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 90357760 unmapped: 819200 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:16.620107+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 90357760 unmapped: 819200 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa896000/0x0/0x4ffc00000, data 0xcf400d/0xdd7000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:17.620269+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072468 data_alloc: 218103808 data_used: 344064
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91414528 unmapped: 811008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:18.620415+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91414528 unmapped: 811008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:19.620565+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91414528 unmapped: 811008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:20.620718+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91414528 unmapped: 811008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa899000/0x0/0x4ffc00000, data 0xcf3ed7/0xdd5000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.493772507s of 12.691194534s, submitted: 54
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:21.620883+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91414528 unmapped: 811008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:22.621102+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0xcf5abd/0xdd8000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076802 data_alloc: 218103808 data_used: 356352
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91414528 unmapped: 811008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:23.621241+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91414528 unmapped: 811008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:24.621393+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91414528 unmapped: 811008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0xcf5abd/0xdd8000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:25.621526+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91422720 unmapped: 802816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:26.621677+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91422720 unmapped: 802816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:27.621831+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080448 data_alloc: 218103808 data_used: 356352
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91430912 unmapped: 794624 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:28.621990+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91439104 unmapped: 786432 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:29.622155+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91439104 unmapped: 786432 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fa892000/0x0/0x4ffc00000, data 0xcf7520/0xddb000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:30.622348+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91439104 unmapped: 786432 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:31.622507+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91439104 unmapped: 786432 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fa892000/0x0/0x4ffc00000, data 0xcf7520/0xddb000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:32.622644+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079776 data_alloc: 218103808 data_used: 356352
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91439104 unmapped: 786432 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:33.622864+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fa892000/0x0/0x4ffc00000, data 0xcf7520/0xddb000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91447296 unmapped: 778240 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:34.623080+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91447296 unmapped: 778240 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:35.623256+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91447296 unmapped: 778240 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:36.623446+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91447296 unmapped: 778240 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:37.623684+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079776 data_alloc: 218103808 data_used: 356352
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91447296 unmapped: 778240 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fa892000/0x0/0x4ffc00000, data 0xcf7520/0xddb000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:38.623837+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91447296 unmapped: 778240 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:39.623968+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91447296 unmapped: 778240 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:40.624136+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fa892000/0x0/0x4ffc00000, data 0xcf7520/0xddb000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91463680 unmapped: 761856 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.418195724s of 20.149101257s, submitted: 36
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:41.624271+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91488256 unmapped: 737280 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fa88f000/0x0/0x4ffc00000, data 0xcf9136/0xdde000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:42.624420+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082750 data_alloc: 218103808 data_used: 356352
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91488256 unmapped: 737280 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:43.624578+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91488256 unmapped: 737280 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fa88f000/0x0/0x4ffc00000, data 0xcf9136/0xdde000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:44.624747+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Got map version 15
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91496448 unmapped: 729088 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: handle_auth_request added challenge on 0x5626134a6000
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:45.624889+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91496448 unmapped: 729088 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fa88f000/0x0/0x4ffc00000, data 0xcf9248/0xddf000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:46.625031+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 729088 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:47.625129+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087812 data_alloc: 218103808 data_used: 364544
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 729088 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfaccb/0xde2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:48.625246+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 712704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:49.625375+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 712704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:50.625511+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfabb9/0xde1000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 712704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:51.625623+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 712704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfabb9/0xde1000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:52.625755+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087074 data_alloc: 218103808 data_used: 364544
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 712704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:53.625923+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 712704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:54.626085+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 712704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:55.626285+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 712704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:56.626454+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfabb9/0xde1000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:57.626577+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087074 data_alloc: 218103808 data_used: 364544
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:58.626725+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:59.626885+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:00.627114+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.447078705s of 19.782892227s, submitted: 39
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:01.627298+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:02.627478+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88c000/0x0/0x4ffc00000, data 0xcfac54/0xde2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086932 data_alloc: 218103808 data_used: 364544
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:03.627649+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:04.627831+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:05.628032+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:06.628284+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88c000/0x0/0x4ffc00000, data 0xcfac54/0xde2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:07.628468+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086756 data_alloc: 218103808 data_used: 364544
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:08.628611+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:09.628809+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88c000/0x0/0x4ffc00000, data 0xcfac54/0xde2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:10.629033+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.371644974s of 10.547924042s, submitted: 4
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:11.629164+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:12.629331+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088524 data_alloc: 218103808 data_used: 364544
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 696320 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:13.629489+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 696320 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:14.629677+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfacef/0xde3000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 696320 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfacef/0xde3000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:15.629837+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 696320 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:16.630025+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 696320 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:17.630184+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087786 data_alloc: 218103808 data_used: 364544
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 696320 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfacef/0xde3000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:18.630361+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfac54/0xde2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 696320 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:19.630540+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 696320 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:20.630750+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 696320 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:21.631009+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfacef/0xde3000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 696320 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:22.631129+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088524 data_alloc: 218103808 data_used: 364544
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 688128 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:23.631287+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 688128 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:24.631554+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.063726425s of 13.080301285s, submitted: 4
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 688128 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:25.631709+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 688128 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:26.631850+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 688128 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:27.631984+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88c000/0x0/0x4ffc00000, data 0xcfac54/0xde2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086756 data_alloc: 218103808 data_used: 364544
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 688128 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:28.632104+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 688128 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:29.632277+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 688128 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:30.632520+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88a000/0x0/0x4ffc00000, data 0xcfad8a/0xde4000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 679936 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:31.632791+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88a000/0x0/0x4ffc00000, data 0xcfad8a/0xde4000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 679936 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:32.632934+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090292 data_alloc: 218103808 data_used: 364544
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 679936 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:33.633154+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 679936 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:34.633346+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.989791870s of 10.007966042s, submitted: 3
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88a000/0x0/0x4ffc00000, data 0xcfad8a/0xde4000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 679936 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:35.633540+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 679936 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:36.633682+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 679936 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:37.633959+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089426 data_alloc: 218103808 data_used: 364544
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfacef/0xde3000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 671744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:38.634125+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 671744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:39.634362+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 671744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfacef/0xde3000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:40.634569+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 671744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:41.634749+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 671744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:42.634979+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092048 data_alloc: 218103808 data_used: 364544
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92610560 unmapped: 663552 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa889000/0x0/0x4ffc00000, data 0xcfae25/0xde5000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:43.635227+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92610560 unmapped: 663552 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:44.635424+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92610560 unmapped: 663552 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:45.635567+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _renew_subs
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.755108833s of 11.030391693s, submitted: 7
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:46.635763+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:47.636065+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093636 data_alloc: 218103808 data_used: 372736
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:48.636497+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa888000/0x0/0x4ffc00000, data 0xcfc86a/0xde5000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:49.636850+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:50.637287+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:51.637519+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:52.637718+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa888000/0x0/0x4ffc00000, data 0xcfc86a/0xde5000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 161 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097282 data_alloc: 218103808 data_used: 372736
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92651520 unmapped: 622592 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:53.637855+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92651520 unmapped: 622592 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:54.638003+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92651520 unmapped: 622592 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:55.638150+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.397669792s of 10.364931107s, submitted: 39
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92651520 unmapped: 622592 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:56.638372+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92651520 unmapped: 622592 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:57.638561+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096808 data_alloc: 218103808 data_used: 372736
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92651520 unmapped: 622592 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:58.638735+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa886000/0x0/0x4ffc00000, data 0xcfe2ed/0xde8000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92651520 unmapped: 622592 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:59.639028+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92659712 unmapped: 1662976 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:00.639367+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92659712 unmapped: 1662976 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:01.639746+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92659712 unmapped: 1662976 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:02.639977+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100116 data_alloc: 218103808 data_used: 380928
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92659712 unmapped: 1662976 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa883000/0x0/0x4ffc00000, data 0xcffe68/0xdea000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:03.640168+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92659712 unmapped: 1662976 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:04.640407+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa883000/0x0/0x4ffc00000, data 0xcffe68/0xdea000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92659712 unmapped: 1662976 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:05.640676+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92667904 unmapped: 1654784 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:06.640852+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92667904 unmapped: 1654784 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:07.641099+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104682 data_alloc: 218103808 data_used: 380928
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92667904 unmapped: 1654784 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:08.641378+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa87f000/0x0/0x4ffc00000, data 0xd019e1/0xdee000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92667904 unmapped: 1654784 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:09.641626+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.413195610s of 14.541606903s, submitted: 76
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:10.641838+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:11.642095+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa87e000/0x0/0x4ffc00000, data 0xd01af6/0xdef000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:12.642303+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106450 data_alloc: 218103808 data_used: 380928
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:13.642522+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa87e000/0x0/0x4ffc00000, data 0xd01af6/0xdef000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:14.642712+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa87e000/0x0/0x4ffc00000, data 0xd01af6/0xdef000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:15.642923+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:16.643144+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:17.643364+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106610 data_alloc: 218103808 data_used: 385024
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:18.643571+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa87e000/0x0/0x4ffc00000, data 0xd01af6/0xdef000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa87e000/0x0/0x4ffc00000, data 0xd01af6/0xdef000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:19.643773+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:20.644002+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:21.644217+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:22.644446+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106610 data_alloc: 218103808 data_used: 385024
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa87e000/0x0/0x4ffc00000, data 0xd01af6/0xdef000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:23.644662+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.262932777s of 13.595630646s, submitted: 1
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 1638400 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:24.644865+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 1638400 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:25.645082+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 1638400 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:26.645284+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0xd018eb/0xded000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 1638400 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:27.645430+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103112 data_alloc: 218103808 data_used: 380928
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 1638400 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:28.645587+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0xd018eb/0xded000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 1638400 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:29.645816+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 1638400 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:30.646083+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92700672 unmapped: 1622016 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:31.646335+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa87f000/0x0/0x4ffc00000, data 0xd01a21/0xdef000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92700672 unmapped: 1622016 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:32.646560+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106648 data_alloc: 218103808 data_used: 380928
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92700672 unmapped: 1622016 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:33.646705+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92700672 unmapped: 1622016 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:34.646963+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92700672 unmapped: 1622016 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:35.647184+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92700672 unmapped: 1622016 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:36.647386+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.171231270s of 12.513448715s, submitted: 4
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa87f000/0x0/0x4ffc00000, data 0xd01a21/0xdef000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 1597440 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:37.647588+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110470 data_alloc: 218103808 data_used: 389120
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 1597440 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:38.647766+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa87b000/0x0/0x4ffc00000, data 0xd03607/0xdf2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 1597440 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:39.647974+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa87c000/0x0/0x4ffc00000, data 0xd0356c/0xdf1000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 1597440 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:40.648219+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 165 handle_osd_map epochs [165,166], i have 165, src has [1,166]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 1572864 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:41.648415+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 1564672 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:42.648597+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa879000/0x0/0x4ffc00000, data 0xd04fcf/0xdf4000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114312 data_alloc: 218103808 data_used: 401408
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 1564672 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:43.648737+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 1564672 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:44.648961+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 1564672 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:45.649164+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 1564672 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:46.649362+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa879000/0x0/0x4ffc00000, data 0xd04fcf/0xdf4000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 1564672 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:47.649581+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114312 data_alloc: 218103808 data_used: 401408
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 1564672 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:48.649759+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa879000/0x0/0x4ffc00000, data 0xd04fcf/0xdf4000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.479110718s of 12.668789864s, submitted: 39
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92766208 unmapped: 1556480 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:49.649972+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92766208 unmapped: 1556480 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:50.650182+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa878000/0x0/0x4ffc00000, data 0xd0506a/0xdf5000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92774400 unmapped: 1548288 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:51.650335+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92774400 unmapped: 1548288 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:52.650464+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0xd04fcf/0xdf4000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113432 data_alloc: 218103808 data_used: 401408
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92774400 unmapped: 1548288 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:53.650584+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92774400 unmapped: 1548288 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:54.650688+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92782592 unmapped: 1540096 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:55.650800+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92782592 unmapped: 1540096 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:56.650929+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _renew_subs
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 1507328 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:57.651048+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116740 data_alloc: 218103808 data_used: 409600
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 1507328 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:58.651210+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa876000/0x0/0x4ffc00000, data 0xd06b1a/0xdf6000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 1507328 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:59.651411+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 1507328 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:00.651579+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 1507328 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:01.651741+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 1507328 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:02.651938+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa877000/0x0/0x4ffc00000, data 0xd06b1a/0xdf6000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.070169449s of 13.739535332s, submitted: 28
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119714 data_alloc: 218103808 data_used: 409600
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 1499136 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:03.652248+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 1499136 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:04.652467+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 1499136 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:05.652626+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 1499136 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:06.652739+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 1499136 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:07.652847+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 168 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0xd0857d/0xdf9000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119714 data_alloc: 218103808 data_used: 409600
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 1499136 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:08.653021+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 1499136 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:09.653142+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 1499136 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:10.653319+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:11.653517+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 1499136 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 168 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0xd0857d/0xdf9000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:12.653649+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92831744 unmapped: 1490944 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 168 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0xd0857d/0xdf9000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119714 data_alloc: 218103808 data_used: 409600
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Got map version 16
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:13.653784+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92856320 unmapped: 1466368 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.395549774s of 10.594060898s, submitted: 25
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 168 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0xd0857d/0xdf9000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:14.653987+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92856320 unmapped: 1466368 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 168 heartbeat osd_stat(store_statfs(0x4fa873000/0x0/0x4ffc00000, data 0xd08618/0xdfa000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:15.654200+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92856320 unmapped: 1466368 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:16.654379+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92864512 unmapped: 1458176 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 169 handle_osd_map epochs [169,170], i have 169, src has [1,170]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:17.654554+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129276 data_alloc: 218103808 data_used: 417792
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:18.654680+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 170 heartbeat osd_stat(store_statfs(0x4fa86b000/0x0/0x4ffc00000, data 0xd0be44/0xe00000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:19.654833+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:20.655019+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:21.655194+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:22.655325+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 170 heartbeat osd_stat(store_statfs(0x4fa86c000/0x0/0x4ffc00000, data 0xd0bda9/0xdff000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129858 data_alloc: 218103808 data_used: 417792
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:23.655486+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa86b000/0x0/0x4ffc00000, data 0xd0d82c/0xe02000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:24.655639+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:25.655805+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:26.655939+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:27.656096+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129858 data_alloc: 218103808 data_used: 417792
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:28.656305+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:29.656480+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa86b000/0x0/0x4ffc00000, data 0xd0d82c/0xe02000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:30.656672+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:31.656829+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:32.657977+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129858 data_alloc: 218103808 data_used: 417792
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:33.658108+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:34.658304+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa86b000/0x0/0x4ffc00000, data 0xd0d82c/0xe02000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:35.658641+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:36.658872+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:37.659080+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129858 data_alloc: 218103808 data_used: 417792
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:38.659200+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa86b000/0x0/0x4ffc00000, data 0xd0d82c/0xe02000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:39.659447+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:40.659577+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:41.659709+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.810922623s of 27.994199753s, submitted: 63
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 171 ms_handle_reset con 0x5626134a6000 session 0x5626135681e0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93249536 unmapped: 1073152 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:42.659847+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93249536 unmapped: 1073152 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Got map version 17
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128978 data_alloc: 218103808 data_used: 417792
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:43.659986+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa86c000/0x0/0x4ffc00000, data 0xd0d82c/0xe02000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa86c000/0x0/0x4ffc00000, data 0xd0d82c/0xe02000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:44.660126+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 171 handle_osd_map epochs [171,172], i have 171, src has [1,172]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:45.660293+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:46.660455+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:47.660615+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133152 data_alloc: 218103808 data_used: 425984
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:48.660761+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:49.661002+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0xd0f412/0xe05000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:50.661292+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0xd0f412/0xe05000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:51.661429+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:52.661605+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133152 data_alloc: 218103808 data_used: 425984
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:53.661798+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0xd0f412/0xe05000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:54.661938+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:55.662059+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:56.662199+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 172 handle_osd_map epochs [172,173], i have 172, src has [1,173]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.232500076s of 15.341207504s, submitted: 203
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:57.662339+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136126 data_alloc: 218103808 data_used: 425984
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:58.662517+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:59.662688+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:00.662885+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:01.663080+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:02.663220+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:03.663431+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136126 data_alloc: 218103808 data_used: 425984
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:04.663579+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:05.663768+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:06.663968+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:07.664113+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:08.664276+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136126 data_alloc: 218103808 data_used: 425984
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:09.664469+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:10.664712+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:11.664860+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:12.665019+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:13.665159+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136126 data_alloc: 218103808 data_used: 425984
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:14.665300+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:15.665437+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:16.665549+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:17.665693+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:18.665817+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136126 data_alloc: 218103808 data_used: 425984
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:19.665965+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:20.666135+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:21.666279+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:22.666436+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:23.666562+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136126 data_alloc: 218103808 data_used: 425984
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:24.666751+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:25.666881+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:26.667043+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:27.667161+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:28.667374+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136126 data_alloc: 218103808 data_used: 425984
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:29.667523+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:30.667676+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:31.667799+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:32.667919+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93298688 unmapped: 1024000 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:33.668068+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136126 data_alloc: 218103808 data_used: 425984
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93298688 unmapped: 1024000 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:34.668237+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93298688 unmapped: 1024000 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:35.668360+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93298688 unmapped: 1024000 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:36.668529+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93298688 unmapped: 1024000 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:37.668674+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93298688 unmapped: 1024000 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:38.668833+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136126 data_alloc: 218103808 data_used: 425984
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 1220608 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:39.669001+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 1220608 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:40.669161+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 1220608 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.292816162s of 44.330963135s, submitted: 13
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 173 ms_handle_reset con 0x56261301fc00 session 0x562612fca5a0
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:41.669285+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93339648 unmapped: 983040 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:42.669399+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 974848 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Got map version 18
Oct 01 17:11:25 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:43.669584+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 974848 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:44.669706+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 974848 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:45.669817+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 974848 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:46.669981+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 974848 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:47.670159+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 974848 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:48.670313+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 974848 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:49.670422+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 974848 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:50.670570+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 974848 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:51.670678+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 974848 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:52.670805+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93388800 unmapped: 933888 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: do_command 'config diff' '{prefix=config diff}'
Oct 01 17:11:25 compute-0 ceph-osd[90269]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 01 17:11:25 compute-0 ceph-osd[90269]: do_command 'config show' '{prefix=config show}'
Oct 01 17:11:25 compute-0 ceph-osd[90269]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 01 17:11:25 compute-0 ceph-osd[90269]: do_command 'counter dump' '{prefix=counter dump}'
Oct 01 17:11:25 compute-0 ceph-osd[90269]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:53.670949+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: do_command 'counter schema' '{prefix=counter schema}'
Oct 01 17:11:25 compute-0 ceph-osd[90269]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:25 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:25 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 1769472 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:54.671091+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93847552 unmapped: 1523712 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:25 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:11:25 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:55.671245+0000)
Oct 01 17:11:25 compute-0 ceph-osd[90269]: do_command 'log dump' '{prefix=log dump}'
Oct 01 17:11:26 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14569 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:26 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Oct 01 17:11:26 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 17:11:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:11:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2923275224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:11:26 compute-0 nova_compute[259504]: 2025-10-01 17:11:26.340 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:11:26 compute-0 ceph-mon[74273]: from='client.14555 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:26 compute-0 ceph-mon[74273]: from='client.14559 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/209229292' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 01 17:11:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/533120042' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 01 17:11:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2923275224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:11:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 01 17:11:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1353527017' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 01 17:11:26 compute-0 nova_compute[259504]: 2025-10-01 17:11:26.497 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 17:11:26 compute-0 nova_compute[259504]: 2025-10-01 17:11:26.498 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4906MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 17:11:26 compute-0 nova_compute[259504]: 2025-10-01 17:11:26.499 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:11:26 compute-0 nova_compute[259504]: 2025-10-01 17:11:26.500 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:11:26 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14575 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:26 compute-0 nova_compute[259504]: 2025-10-01 17:11:26.580 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 17:11:26 compute-0 nova_compute[259504]: 2025-10-01 17:11:26.581 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 17:11:26 compute-0 nova_compute[259504]: 2025-10-01 17:11:26.596 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:11:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 01 17:11:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3020989331' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 01 17:11:26 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14579 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:11:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2184078815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:11:26 compute-0 nova_compute[259504]: 2025-10-01 17:11:26.996 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:11:27 compute-0 nova_compute[259504]: 2025-10-01 17:11:27.001 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 17:11:27 compute-0 nova_compute[259504]: 2025-10-01 17:11:27.024 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 17:11:27 compute-0 nova_compute[259504]: 2025-10-01 17:11:27.026 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 17:11:27 compute-0 nova_compute[259504]: 2025-10-01 17:11:27.026 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:11:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:11:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 01 17:11:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1019665513' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 01 17:11:27 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14585 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:27 compute-0 ceph-mon[74273]: from='client.14561 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:27 compute-0 ceph-mon[74273]: from='client.14565 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:27 compute-0 ceph-mon[74273]: from='client.14569 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:27 compute-0 ceph-mon[74273]: pgmap v1281: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Oct 01 17:11:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1353527017' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 01 17:11:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3020989331' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 01 17:11:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2184078815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:11:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1019665513' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 01 17:11:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 01 17:11:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3886655900' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 01 17:11:27 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14593 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:27 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 01 17:11:27 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:11:27.988+0000 7f816b913640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 01 17:11:28 compute-0 nova_compute[259504]: 2025-10-01 17:11:28.025 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:11:28 compute-0 nova_compute[259504]: 2025-10-01 17:11:28.026 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:11:28 compute-0 nova_compute[259504]: 2025-10-01 17:11:28.026 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:11:28 compute-0 nova_compute[259504]: 2025-10-01 17:11:28.027 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:11:28 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Oct 01 17:11:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 01 17:11:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/533563385' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 01 17:11:28 compute-0 ceph-mon[74273]: from='client.14575 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:28 compute-0 ceph-mon[74273]: from='client.14579 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:28 compute-0 ceph-mon[74273]: from='client.14585 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3886655900' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 01 17:11:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 01 17:11:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3148814483' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 01 17:11:28 compute-0 nova_compute[259504]: 2025-10-01 17:11:28.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:11:28 compute-0 nova_compute[259504]: 2025-10-01 17:11:28.751 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 17:11:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 01 17:11:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4260592874' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 01 17:11:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 01 17:11:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3477856268' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 01 17:11:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 01 17:11:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1269972093' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 01 17:11:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 01 17:11:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/897296467' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 01 17:11:29 compute-0 ceph-mon[74273]: from='client.14593 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:29 compute-0 ceph-mon[74273]: pgmap v1282: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Oct 01 17:11:29 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/533563385' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 01 17:11:29 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3148814483' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 01 17:11:29 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4260592874' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 01 17:11:29 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3477856268' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 01 17:11:29 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1269972093' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 01 17:11:29 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/897296467' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 01 17:11:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 01 17:11:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3577970147' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 01 17:11:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 01 17:11:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1929943715' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 01 17:11:29 compute-0 crontab[282545]: (root) LIST (root)
Oct 01 17:11:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 01 17:11:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3025202361' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 01 17:11:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 01 17:11:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2416444125' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1171456 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:05.576605+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1171456 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:06.576746+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1163264 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 808997 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:07.577034+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1163264 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:08.577204+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1146880 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:09.577366+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1146880 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.a scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.a scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:10.577504+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:39:40.537866+0000 osd.1 (osd.1) 126 : cluster [DBG] 2.a scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:39:40.551939+0000 osd.1 (osd.1) 127 : cluster [DBG] 2.a scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 127) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:39:40.537866+0000 osd.1 (osd.1) 126 : cluster [DBG] 2.a scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:39:40.551939+0000 osd.1 (osd.1) 127 : cluster [DBG] 2.a scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 1138688 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:11.577713+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 1138688 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810144 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:12.577931+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 1138688 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.062666893s of 12.074812889s, submitted: 4
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:13.578044+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 1 last_log 128 sent 127 num 1 unsent 1 sending 1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:39:43.564215+0000 osd.1 (osd.1) 128 : cluster [DBG] 5.c scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 128) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:39:43.564215+0000 osd.1 (osd.1) 128 : cluster [DBG] 5.c scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1130496 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.7 deep-scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:14.578214+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 130 sent 128 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:39:43.578321+0000 osd.1 (osd.1) 129 : cluster [DBG] 5.c scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:39:44.575101+0000 osd.1 (osd.1) 130 : cluster [DBG] 2.7 deep-scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.7 deep-scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 130) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:39:43.578321+0000 osd.1 (osd.1) 129 : cluster [DBG] 5.c scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:39:44.575101+0000 osd.1 (osd.1) 130 : cluster [DBG] 2.7 deep-scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1130496 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:15.578499+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 1 last_log 131 sent 130 num 1 unsent 1 sending 1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:39:44.589232+0000 osd.1 (osd.1) 131 : cluster [DBG] 2.7 deep-scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.f scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.f scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 131) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:39:44.589232+0000 osd.1 (osd.1) 131 : cluster [DBG] 2.7 deep-scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 1122304 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:16.578753+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:39:45.593883+0000 osd.1 (osd.1) 132 : cluster [DBG] 10.f scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:39:45.608069+0000 osd.1 (osd.1) 133 : cluster [DBG] 10.f scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 133) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:39:45.593883+0000 osd.1 (osd.1) 132 : cluster [DBG] 10.f scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:39:45.608069+0000 osd.1 (osd.1) 133 : cluster [DBG] 10.f scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 1122304 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 813586 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:17.578936+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:39:47.545857+0000 osd.1 (osd.1) 134 : cluster [DBG] 10.2 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:39:47.559848+0000 osd.1 (osd.1) 135 : cluster [DBG] 10.2 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 135) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:39:47.545857+0000 osd.1 (osd.1) 134 : cluster [DBG] 10.2 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:39:47.559848+0000 osd.1 (osd.1) 135 : cluster [DBG] 10.2 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 1122304 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:18.579142+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 1122304 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:19.579703+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1114112 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:20.579856+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:39:50.495736+0000 osd.1 (osd.1) 136 : cluster [DBG] 2.6 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:39:50.509997+0000 osd.1 (osd.1) 137 : cluster [DBG] 2.6 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 137) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:39:50.495736+0000 osd.1 (osd.1) 136 : cluster [DBG] 2.6 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:39:50.509997+0000 osd.1 (osd.1) 137 : cluster [DBG] 2.6 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1097728 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:21.580033+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1097728 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 815881 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:22.581221+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1097728 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:23.581344+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1089536 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.944370270s of 10.980671883s, submitted: 10
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:24.581478+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:39:54.545136+0000 osd.1 (osd.1) 138 : cluster [DBG] 10.11 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:39:54.559265+0000 osd.1 (osd.1) 139 : cluster [DBG] 10.11 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 139) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:39:54.545136+0000 osd.1 (osd.1) 138 : cluster [DBG] 10.11 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:39:54.559265+0000 osd.1 (osd.1) 139 : cluster [DBG] 10.11 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1081344 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:25.581964+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1081344 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:26.582389+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1073152 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817030 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:27.582690+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1073152 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:28.582925+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:39:57.646596+0000 osd.1 (osd.1) 140 : cluster [DBG] 5.1 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:39:57.660691+0000 osd.1 (osd.1) 141 : cluster [DBG] 5.1 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 141) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:39:57.646596+0000 osd.1 (osd.1) 140 : cluster [DBG] 5.1 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:39:57.660691+0000 osd.1 (osd.1) 141 : cluster [DBG] 5.1 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1056768 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:29.583140+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1056768 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:30.583291+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1048576 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:31.583663+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1048576 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 818177 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:32.584130+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1040384 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:33.584412+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1040384 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:34.584540+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.220841408s of 10.232955933s, submitted: 4
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1040384 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:35.584759+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:04.778257+0000 osd.1 (osd.1) 142 : cluster [DBG] 10.10 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:04.792200+0000 osd.1 (osd.1) 143 : cluster [DBG] 10.10 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 143) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:04.778257+0000 osd.1 (osd.1) 142 : cluster [DBG] 10.10 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:04.792200+0000 osd.1 (osd.1) 143 : cluster [DBG] 10.10 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 1032192 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:36.585039+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1024000 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 820475 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:37.585298+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:06.826569+0000 osd.1 (osd.1) 144 : cluster [DBG] 10.13 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:06.840686+0000 osd.1 (osd.1) 145 : cluster [DBG] 10.13 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 145) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:06.826569+0000 osd.1 (osd.1) 144 : cluster [DBG] 10.13 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:06.840686+0000 osd.1 (osd.1) 145 : cluster [DBG] 10.13 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 1015808 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:38.585549+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 1015808 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:39.585765+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:08.882191+0000 osd.1 (osd.1) 146 : cluster [DBG] 2.1b scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:08.896265+0000 osd.1 (osd.1) 147 : cluster [DBG] 2.1b scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 147) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:08.882191+0000 osd.1 (osd.1) 146 : cluster [DBG] 2.1b scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:08.896265+0000 osd.1 (osd.1) 147 : cluster [DBG] 2.1b scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 1015808 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:40.586002+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:09.924003+0000 osd.1 (osd.1) 148 : cluster [DBG] 10.12 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:09.938068+0000 osd.1 (osd.1) 149 : cluster [DBG] 10.12 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 149) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:09.924003+0000 osd.1 (osd.1) 148 : cluster [DBG] 10.12 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:09.938068+0000 osd.1 (osd.1) 149 : cluster [DBG] 10.12 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 999424 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:41.586209+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:10.906009+0000 osd.1 (osd.1) 150 : cluster [DBG] 5.1d scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:10.920090+0000 osd.1 (osd.1) 151 : cluster [DBG] 5.1d scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 151) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:10.906009+0000 osd.1 (osd.1) 150 : cluster [DBG] 5.1d scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:10.920090+0000 osd.1 (osd.1) 151 : cluster [DBG] 5.1d scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 999424 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 823920 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:42.586361+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 983040 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:43.586525+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 983040 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:44.586727+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 974848 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:45.586930+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 974848 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:46.587120+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 974848 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 823920 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:47.587265+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 966656 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:48.587433+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.195523262s of 14.240509987s, submitted: 10
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 958464 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:49.587626+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:19.018664+0000 osd.1 (osd.1) 152 : cluster [DBG] 2.9 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:19.032738+0000 osd.1 (osd.1) 153 : cluster [DBG] 2.9 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 942080 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 153) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:19.018664+0000 osd.1 (osd.1) 152 : cluster [DBG] 2.9 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:19.032738+0000 osd.1 (osd.1) 153 : cluster [DBG] 2.9 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:50.587811+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:20.041774+0000 osd.1 (osd.1) 154 : cluster [DBG] 5.1a scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:20.056184+0000 osd.1 (osd.1) 155 : cluster [DBG] 5.1a scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 933888 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 155) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:20.041774+0000 osd.1 (osd.1) 154 : cluster [DBG] 5.1a scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:20.056184+0000 osd.1 (osd.1) 155 : cluster [DBG] 5.1a scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:51.588032+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 933888 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826215 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:52.588188+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 925696 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:53.588559+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:54.588960+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 925696 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:55.589297+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 917504 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:56.589529+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 917504 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826215 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:57.589682+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 909312 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:58.589833+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 909312 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:59.590116+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 909312 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:00.590262+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 901120 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:01.590439+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 901120 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.010319710s of 13.026381493s, submitted: 4
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:02.590712+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:32.045091+0000 osd.1 (osd.1) 156 : cluster [DBG] 10.14 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:32.062802+0000 osd.1 (osd.1) 157 : cluster [DBG] 10.14 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 827364 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 901120 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 157) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:32.045091+0000 osd.1 (osd.1) 156 : cluster [DBG] 10.14 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:32.062802+0000 osd.1 (osd.1) 157 : cluster [DBG] 10.14 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:03.591066+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 892928 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:04.591331+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 892928 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:05.591655+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 892928 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:06.591859+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:36.011371+0000 osd.1 (osd.1) 158 : cluster [DBG] 5.18 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:36.025486+0000 osd.1 (osd.1) 159 : cluster [DBG] 5.18 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 884736 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 159) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:36.011371+0000 osd.1 (osd.1) 158 : cluster [DBG] 5.18 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:36.025486+0000 osd.1 (osd.1) 159 : cluster [DBG] 5.18 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:07.592130+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828512 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 884736 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:08.599026+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 876544 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:09.599172+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 876544 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:10.599345+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 868352 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:11.599498+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 868352 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:12.599705+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828512 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 860160 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:13.600004+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 860160 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:14.600164+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 860160 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.940883636s of 12.953789711s, submitted: 4
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:15.600338+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:44.998915+0000 osd.1 (osd.1) 160 : cluster [DBG] 5.19 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:45.013048+0000 osd.1 (osd.1) 161 : cluster [DBG] 5.19 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 843776 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 161) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:44.998915+0000 osd.1 (osd.1) 160 : cluster [DBG] 5.19 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:45.013048+0000 osd.1 (osd.1) 161 : cluster [DBG] 5.19 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:16.600589+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:45.965685+0000 osd.1 (osd.1) 162 : cluster [DBG] 4.10 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:45.979778+0000 osd.1 (osd.1) 163 : cluster [DBG] 4.10 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 835584 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 163) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:45.965685+0000 osd.1 (osd.1) 162 : cluster [DBG] 4.10 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:45.979778+0000 osd.1 (osd.1) 163 : cluster [DBG] 4.10 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:17.600767+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 830808 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 835584 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:18.600949+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 835584 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:19.601145+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 827392 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:20.601856+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 827392 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:21.602001+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 819200 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:22.602150+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 830808 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 819200 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:23.602300+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 819200 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:24.602409+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 811008 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:25.602553+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 811008 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:26.602682+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 811008 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:27.602805+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 830808 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 802816 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:28.602947+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 794624 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.901079178s of 13.914241791s, submitted: 4
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:29.603117+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:58.913224+0000 osd.1 (osd.1) 164 : cluster [DBG] 4.f scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:40:58.927256+0000 osd.1 (osd.1) 165 : cluster [DBG] 4.f scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 1835008 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 165) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:58.913224+0000 osd.1 (osd.1) 164 : cluster [DBG] 4.f scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:40:58.927256+0000 osd.1 (osd.1) 165 : cluster [DBG] 4.f scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:30.603333+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 1835008 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:31.603479+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:00.975696+0000 osd.1 (osd.1) 166 : cluster [DBG] 4.14 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:00.989885+0000 osd.1 (osd.1) 167 : cluster [DBG] 4.14 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 1826816 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 167) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:00.975696+0000 osd.1 (osd.1) 166 : cluster [DBG] 4.14 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:00.989885+0000 osd.1 (osd.1) 167 : cluster [DBG] 4.14 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:32.603678+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 833103 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 1826816 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:33.603822+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:03.005758+0000 osd.1 (osd.1) 168 : cluster [DBG] 4.d scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:03.019885+0000 osd.1 (osd.1) 169 : cluster [DBG] 4.d scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 1826816 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 169) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:03.005758+0000 osd.1 (osd.1) 168 : cluster [DBG] 4.d scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:03.019885+0000 osd.1 (osd.1) 169 : cluster [DBG] 4.d scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:34.604046+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 1818624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:35.604353+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:05.008532+0000 osd.1 (osd.1) 170 : cluster [DBG] 6.1 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:05.022661+0000 osd.1 (osd.1) 171 : cluster [DBG] 6.1 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 1810432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 171) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:05.008532+0000 osd.1 (osd.1) 170 : cluster [DBG] 6.1 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:05.022661+0000 osd.1 (osd.1) 171 : cluster [DBG] 6.1 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:36.604560+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:06.051477+0000 osd.1 (osd.1) 172 : cluster [DBG] 4.9 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:06.065620+0000 osd.1 (osd.1) 173 : cluster [DBG] 4.9 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 1794048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 173) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:06.051477+0000 osd.1 (osd.1) 172 : cluster [DBG] 4.9 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:06.065620+0000 osd.1 (osd.1) 173 : cluster [DBG] 4.9 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:37.604822+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836544 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 1794048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.4 deep-scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.4 deep-scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:38.605026+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:08.042433+0000 osd.1 (osd.1) 174 : cluster [DBG] 4.4 deep-scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:08.056563+0000 osd.1 (osd.1) 175 : cluster [DBG] 4.4 deep-scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 1785856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 175) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:08.042433+0000 osd.1 (osd.1) 174 : cluster [DBG] 4.4 deep-scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:08.056563+0000 osd.1 (osd.1) 175 : cluster [DBG] 4.4 deep-scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.109175682s of 10.159690857s, submitted: 12
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:39.605332+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:09.072877+0000 osd.1 (osd.1) 176 : cluster [DBG] 4.12 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:09.086994+0000 osd.1 (osd.1) 177 : cluster [DBG] 4.12 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 1777664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 177) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:09.072877+0000 osd.1 (osd.1) 176 : cluster [DBG] 4.12 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:09.086994+0000 osd.1 (osd.1) 177 : cluster [DBG] 4.12 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.2 deep-scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.2 deep-scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:40.605608+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:10.116596+0000 osd.1 (osd.1) 178 : cluster [DBG] 4.2 deep-scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:10.130680+0000 osd.1 (osd.1) 179 : cluster [DBG] 4.2 deep-scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 1761280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 179) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:10.116596+0000 osd.1 (osd.1) 178 : cluster [DBG] 4.2 deep-scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:10.130680+0000 osd.1 (osd.1) 179 : cluster [DBG] 4.2 deep-scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:41.605813+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:11.114203+0000 osd.1 (osd.1) 180 : cluster [DBG] 4.5 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:11.128304+0000 osd.1 (osd.1) 181 : cluster [DBG] 4.5 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 1761280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 181) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:11.114203+0000 osd.1 (osd.1) 180 : cluster [DBG] 4.5 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:11.128304+0000 osd.1 (osd.1) 181 : cluster [DBG] 4.5 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:42.606043+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841133 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 1761280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:43.606173+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:13.122428+0000 osd.1 (osd.1) 182 : cluster [DBG] 4.8 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:13.136608+0000 osd.1 (osd.1) 183 : cluster [DBG] 4.8 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 1753088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 183) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:13.122428+0000 osd.1 (osd.1) 182 : cluster [DBG] 4.8 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:13.136608+0000 osd.1 (osd.1) 183 : cluster [DBG] 4.8 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:44.606369+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:14.082874+0000 osd.1 (osd.1) 184 : cluster [DBG] 4.7 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:14.096992+0000 osd.1 (osd.1) 185 : cluster [DBG] 4.7 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 1753088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 185) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:14.082874+0000 osd.1 (osd.1) 184 : cluster [DBG] 4.7 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:14.096992+0000 osd.1 (osd.1) 185 : cluster [DBG] 4.7 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:45.606550+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 1744896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:46.606684+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 1744896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:47.606843+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 843427 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 1736704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:48.607058+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 1728512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:49.607195+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 1728512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:50.607331+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 1720320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:51.607518+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 1720320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:52.607752+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 843427 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 1720320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:53.607861+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 1712128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:54.608012+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 1712128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:55.609496+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 1703936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:56.609999+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 1703936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:57.610130+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 843427 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 1695744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:58.610804+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 1695744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:59.610951+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 1695744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.732542038s of 20.781471252s, submitted: 10
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:00.611327+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:29.854461+0000 osd.1 (osd.1) 186 : cluster [DBG] 6.6 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:29.872140+0000 osd.1 (osd.1) 187 : cluster [DBG] 6.6 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 1687552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 187) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:29.854461+0000 osd.1 (osd.1) 186 : cluster [DBG] 6.6 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:29.872140+0000 osd.1 (osd.1) 187 : cluster [DBG] 6.6 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:01.611613+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:30.885087+0000 osd.1 (osd.1) 188 : cluster [DBG] 6.2 deep-scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:30.899243+0000 osd.1 (osd.1) 189 : cluster [DBG] 6.2 deep-scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 1687552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 189) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:30.885087+0000 osd.1 (osd.1) 188 : cluster [DBG] 6.2 deep-scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:30.899243+0000 osd.1 (osd.1) 189 : cluster [DBG] 6.2 deep-scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:02.612088+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:31.886753+0000 osd.1 (osd.1) 190 : cluster [DBG] 6.e scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:31.904434+0000 osd.1 (osd.1) 191 : cluster [DBG] 6.e scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 846868 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 1679360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 191) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:31.886753+0000 osd.1 (osd.1) 190 : cluster [DBG] 6.e scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:31.904434+0000 osd.1 (osd.1) 191 : cluster [DBG] 6.e scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:03.612367+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 1679360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:04.612585+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 1671168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:05.612814+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 1671168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:06.613124+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 1671168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:07.613509+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 846868 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.c deep-scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 1662976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.c deep-scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:08.613874+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:37.884175+0000 osd.1 (osd.1) 192 : cluster [DBG] 6.c deep-scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:37.901776+0000 osd.1 (osd.1) 193 : cluster [DBG] 6.c deep-scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 1662976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 193) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:37.884175+0000 osd.1 (osd.1) 192 : cluster [DBG] 6.c deep-scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:37.901776+0000 osd.1 (osd.1) 193 : cluster [DBG] 6.c deep-scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:09.614272+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:38.923521+0000 osd.1 (osd.1) 194 : cluster [DBG] 6.4 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:38.951702+0000 osd.1 (osd.1) 195 : cluster [DBG] 6.4 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 1654784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.b scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.035073280s of 10.072850227s, submitted: 10
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.b scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 195) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:38.923521+0000 osd.1 (osd.1) 194 : cluster [DBG] 6.4 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:38.951702+0000 osd.1 (osd.1) 195 : cluster [DBG] 6.4 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:10.614471+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:39.927207+0000 osd.1 (osd.1) 196 : cluster [DBG] 6.b scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:39.944924+0000 osd.1 (osd.1) 197 : cluster [DBG] 6.b scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 1638400 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.d deep-scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 6.d deep-scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 197) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:39.927207+0000 osd.1 (osd.1) 196 : cluster [DBG] 6.b scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:39.944924+0000 osd.1 (osd.1) 197 : cluster [DBG] 6.b scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:11.614654+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:40.943658+0000 osd.1 (osd.1) 198 : cluster [DBG] 6.d deep-scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:40.964913+0000 osd.1 (osd.1) 199 : cluster [DBG] 6.d deep-scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 1622016 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:12.614995+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 199) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:40.943658+0000 osd.1 (osd.1) 198 : cluster [DBG] 6.d deep-scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:40.964913+0000 osd.1 (osd.1) 199 : cluster [DBG] 6.d deep-scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 851456 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 1622016 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:13.615160+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:42.901045+0000 osd.1 (osd.1) 200 : cluster [DBG] 9.15 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:42.929313+0000 osd.1 (osd.1) 201 : cluster [DBG] 9.15 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 201) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:42.901045+0000 osd.1 (osd.1) 200 : cluster [DBG] 9.15 scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:42.929313+0000 osd.1 (osd.1) 201 : cluster [DBG] 9.15 scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 1605632 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:14.615432+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 1605632 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:15.615635+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:44.846935+0000 osd.1 (osd.1) 202 : cluster [DBG] 9.1f scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  will send 2025-10-01T16:41:44.878420+0000 osd.1 (osd.1) 203 : cluster [DBG] 9.1f scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client handle_log_ack log(last 203) v1
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:44.846935+0000 osd.1 (osd.1) 202 : cluster [DBG] 9.1f scrub starts
Oct 01 17:11:30 compute-0 ceph-osd[89167]: log_client  logged 2025-10-01T16:41:44.878420+0000 osd.1 (osd.1) 203 : cluster [DBG] 9.1f scrub ok
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 1605632 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:16.615852+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 1597440 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:17.616009+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 1597440 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:18.616158+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 1589248 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:19.616451+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 1589248 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:20.616796+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 1589248 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:21.616985+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 1581056 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:22.617346+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 1581056 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:23.617616+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 1572864 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:24.617747+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 1572864 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:25.617975+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 1572864 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:26.618145+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 1564672 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:27.618448+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 1564672 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:28.618601+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 1556480 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:29.618787+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 1556480 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:30.619227+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 1548288 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:31.619791+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 1548288 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:32.622173+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 1548288 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:33.624764+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 1540096 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:34.626161+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 1540096 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:35.626857+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 1531904 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:36.627335+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 1531904 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:37.628754+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 1523712 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:38.630021+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 1523712 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:39.630200+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 1523712 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:40.630715+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 1523712 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:41.631007+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 1507328 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:42.631130+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 1507328 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:43.631733+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 1499136 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:44.632281+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 1499136 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:45.632536+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 1499136 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:46.632746+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 1490944 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:47.632967+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 1482752 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:48.633145+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 1482752 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:49.633457+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 1482752 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:50.633643+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 1474560 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:51.633822+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 1474560 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:52.634037+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 1466368 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:53.634251+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 1466368 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:54.634454+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 1458176 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:55.634671+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 1458176 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:56.634797+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 1458176 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:57.634987+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 1441792 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:58.635340+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 1441792 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:59.635832+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 1433600 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:00.636037+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 1433600 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:01.636753+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 1433600 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:02.637014+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1425408 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:03.637272+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1425408 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:04.637490+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 1417216 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:05.638119+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 1417216 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:06.638362+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 1417216 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:07.638628+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 1409024 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:08.638842+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 1409024 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:09.639860+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 1400832 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:10.640265+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 1400832 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:11.640438+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 1392640 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:12.640598+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 1392640 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:13.640763+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 1392640 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:14.641258+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 1384448 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:15.641430+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 1384448 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:16.641711+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 1376256 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:17.641867+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 1368064 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:18.642210+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 1359872 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:19.642352+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 1359872 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:20.642589+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 1359872 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:21.642761+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 1351680 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:22.642944+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 1351680 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:23.643098+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 1343488 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:24.643283+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 1343488 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:25.643501+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 1343488 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:26.643736+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 1335296 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:27.643916+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 1335296 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:28.644073+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 1327104 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:29.644287+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 1327104 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:30.644457+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 1327104 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:31.644620+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1318912 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:32.644809+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1318912 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:33.644982+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 1310720 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:34.645140+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 1310720 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:35.645354+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1302528 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:36.645681+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1302528 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:37.645831+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 1294336 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:38.646038+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 1294336 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:39.646214+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 1294336 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:40.646456+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 1286144 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:41.646668+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 1269760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:42.646941+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 1261568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:43.647108+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 1261568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:44.647351+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 1261568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:45.647532+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1253376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:46.647660+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1253376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:47.647833+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 1245184 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:48.648117+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 1245184 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:49.648262+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 1245184 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:50.648434+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 1236992 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:51.648555+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 1236992 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:52.648712+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 1228800 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:53.648847+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 1228800 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:54.649019+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 1228800 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:55.649380+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 1220608 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:56.649584+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 1220608 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:57.649731+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:58.649884+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:59.650033+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:00.650162+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 1204224 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:01.650337+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 1204224 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:02.650504+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 1196032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:03.650659+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 1196032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:04.650847+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 1187840 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:05.651009+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 1187840 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:06.651187+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 1179648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:07.651313+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 1179648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:08.651457+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 1179648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:09.651576+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 1179648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:10.651698+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 1171456 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:11.651834+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 1171456 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:12.651956+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 1163264 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:13.652079+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 1163264 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:14.652213+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 1163264 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:15.652347+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 1155072 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:16.652519+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 1155072 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:17.652692+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 1146880 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:18.652818+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 1146880 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:19.653010+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 1146880 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:20.653185+0000)
Oct 01 17:11:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 01 17:11:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1974448892' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 1138688 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:21.653326+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 1138688 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:22.653492+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 1130496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:23.653652+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 1130496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:24.653844+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 1130496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:25.654080+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 1122304 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:26.654254+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 1122304 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:27.654410+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:28.654578+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:29.654763+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:30.654884+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:31.655010+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:32.655146+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1097728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:33.655261+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1097728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:34.655434+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:35.655602+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:36.655747+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:37.655934+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:38.656052+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1073152 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:39.656195+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1073152 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:40.656344+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1073152 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:41.656475+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:42.656628+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:43.656824+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:44.657058+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:45.657303+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:46.657442+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 1048576 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:47.657575+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 1048576 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:48.657716+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 1048576 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:49.657848+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:50.657990+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:51.658155+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:52.659081+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:53.659268+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:54.659450+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:55.659643+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:56.659833+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:57.660025+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 1007616 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:58.660182+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 1007616 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:59.660346+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 999424 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:00.660464+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 999424 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:01.660631+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 991232 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:02.660794+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 991232 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:03.660965+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 983040 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:04.661149+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 983040 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:05.661314+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 983040 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:06.661414+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 974848 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:07.661527+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:08.661653+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 974848 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:09.661803+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 966656 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:10.664027+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 966656 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:11.664181+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 958464 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:12.664418+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 958464 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:13.664567+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 958464 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:14.664680+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 950272 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:15.664850+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 950272 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:16.665018+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 942080 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:17.665199+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 942080 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:18.665385+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 942080 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:19.665537+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 933888 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:20.665701+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 933888 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:21.665857+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 925696 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:22.666037+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 925696 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:23.666189+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 917504 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:24.666383+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 917504 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:25.666581+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 917504 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:26.666729+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 909312 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:27.666920+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 909312 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:28.667109+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 901120 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:29.667306+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 901120 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:30.667456+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 892928 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:31.667622+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 892928 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:32.667794+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 892928 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:33.667942+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 884736 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:34.668112+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 884736 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:35.668285+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 876544 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:36.668399+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 876544 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:37.668512+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 868352 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:38.668653+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 868352 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:39.668753+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 868352 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:40.668910+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 860160 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 6669 writes, 27K keys, 6669 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6669 writes, 1198 syncs, 5.57 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6669 writes, 27K keys, 6669 commit groups, 1.0 writes per commit group, ingest: 19.45 MB, 0.03 MB/s
                                           Interval WAL: 6669 writes, 1198 syncs, 5.57 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f21090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f21090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f21090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:41.669023+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:42.669127+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:43.669302+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:44.669427+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:45.669918+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:46.670120+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:47.670288+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 761856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:48.670472+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 761856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:49.670588+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:50.670728+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:51.670854+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:52.671059+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:53.671208+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:54.671383+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 737280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:55.671595+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 737280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:56.671808+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:57.671980+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:58.672086+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:59.672208+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 720896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:00.672407+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 720896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:01.672701+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:02.672987+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:03.673290+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:04.673547+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:05.673795+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:06.673968+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:07.674215+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:08.674394+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 679936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:09.674567+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 679936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:10.674710+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:11.674926+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:12.675518+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:13.675652+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:14.675796+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:15.676008+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:16.676158+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:17.676600+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:18.676838+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:19.677028+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:20.677212+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:21.677383+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:22.677607+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:23.677765+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:24.678011+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:25.678282+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 622592 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:26.678422+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 622592 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:27.678562+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 614400 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:28.679034+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 606208 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:29.679306+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 606208 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:30.679552+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 598016 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:31.679726+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 598016 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:32.679980+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 589824 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:33.680175+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 589824 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:34.680345+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 581632 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:35.680529+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 581632 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:36.680681+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 573440 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:37.680865+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 267.960327148s of 267.990997314s, submitted: 8
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 573440 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:38.681046+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853824 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 540672 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:39.681213+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 483328 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:40.681544+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 483328 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:41.681720+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:42.681980+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:43.682146+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:44.682301+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:45.682467+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:46.682642+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:47.682784+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:48.682878+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:49.683050+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:50.683185+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:51.683369+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 466944 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:52.683481+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 466944 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:53.683606+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 458752 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:54.683747+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 458752 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:55.683979+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 458752 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:56.684125+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 450560 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:57.684241+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 450560 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:58.684358+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 442368 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:59.684479+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 442368 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:00.684596+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 434176 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:01.684716+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 434176 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:02.684855+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 425984 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:03.684942+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 425984 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:04.685105+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 417792 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:05.685329+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 417792 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:06.685471+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 417792 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:07.685660+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 409600 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:08.685829+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 409600 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:09.685978+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:10.686105+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:11.686246+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 393216 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:12.686482+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 385024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:13.686681+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 385024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:14.686861+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 376832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:15.687374+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 376832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:16.687539+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 368640 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:17.687698+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 368640 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:18.688111+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 360448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:19.688256+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 360448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:20.688457+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 352256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:21.688833+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 352256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:22.688968+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 344064 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:23.689103+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 344064 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:24.689259+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 335872 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:25.689448+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 335872 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:26.689616+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 335872 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:27.689767+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 327680 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:28.689905+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 327680 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:29.690067+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 319488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:30.690237+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 319488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:31.690388+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 319488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:32.690572+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 319488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:33.690797+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 319488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:34.691001+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 311296 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:35.691231+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 311296 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:36.691369+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 303104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:37.691565+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 303104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:38.691707+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 303104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:39.691842+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 294912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:40.691948+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 294912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:41.692076+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:42.692196+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:43.692344+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:44.692506+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:45.692657+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:46.692793+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:47.692999+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:48.693129+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:49.693245+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:50.693374+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:51.693524+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:52.693659+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:53.693795+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:54.693920+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:55.694072+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:56.694184+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:57.694295+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 278528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:58.694425+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:59.694563+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:00.694747+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:01.694875+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:02.694997+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:03.695126+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:04.695259+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:05.695402+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:06.695527+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:07.695653+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:08.695785+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:09.696003+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:10.696126+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:11.696241+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:12.696363+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:13.696495+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:14.696611+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:15.696766+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:16.696991+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:17.697144+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:18.697268+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:19.697490+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:20.697662+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:21.697811+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:22.697996+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:23.698155+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:24.698320+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:25.698511+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:26.698659+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:27.698803+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:28.698936+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:29.699059+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:30.699193+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:31.699332+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:32.699629+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:33.699779+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:34.699984+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:35.700227+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:36.700472+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:37.700675+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:38.700838+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:39.701026+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:40.701202+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:41.701354+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 245760 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:42.701560+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:43.701712+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:44.701846+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:45.702078+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:46.702261+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:47.702442+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:48.702559+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:49.702773+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:50.702950+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:51.703124+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:52.703295+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:53.703478+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:54.703644+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:55.703856+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:56.704294+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:57.704752+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:58.704933+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:59.705654+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:00.705815+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:01.706014+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:02.706218+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:03.706396+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:04.706581+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:05.706788+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:06.707552+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:07.707722+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:08.707925+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:09.708148+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:10.708304+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:11.708454+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:12.708601+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:13.708735+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:14.709008+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:15.709180+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:16.709304+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:17.709504+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:18.709678+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:19.709804+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:20.709965+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:21.710114+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:22.710259+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:23.710408+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:24.710585+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:25.710731+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:26.710865+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:27.711012+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:28.711199+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:29.711345+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:30.711510+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:31.711647+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:32.711816+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:33.711969+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:34.712089+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:35.712312+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:36.712465+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:37.712607+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:38.712760+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:39.712915+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:40.713064+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:41.713227+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:42.713408+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:43.713569+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:44.713725+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:45.713920+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:46.714092+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:47.714228+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:48.714392+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:49.714533+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:50.714728+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:51.714932+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:52.715090+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:53.715230+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:54.715390+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:55.715550+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:56.715705+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:57.715867+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:58.716058+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:59.716229+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:00.716470+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:01.717286+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:02.717469+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:03.717681+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:04.717853+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:05.718115+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:06.718244+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:07.718486+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:08.718812+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:09.719085+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:10.719223+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:11.719374+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:12.719543+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:13.719686+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:14.720106+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:15.720314+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:16.720444+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:17.720563+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:18.720678+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:19.720816+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:20.721136+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:21.721275+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:22.721442+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:23.721631+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:24.721862+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:25.722086+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:26.722230+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:27.722378+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:28.722587+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:29.722726+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:30.722876+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:31.723039+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:32.723226+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:33.723397+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:34.723547+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:35.723733+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:36.723929+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:37.724084+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:38.724265+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:39.724431+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:40.724561+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:41.724714+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:42.724921+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:43.725046+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:44.725176+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:45.725354+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:46.725471+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:47.725625+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 ms_handle_reset con 0x5624c62e0400 session 0x5624c70741e0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c62e1800
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 ms_handle_reset con 0x5624c62e1c00 session 0x5624c6887a40
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c62e0400
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:48.725745+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:49.725916+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:50.726084+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:51.726240+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:52.726381+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:53.726566+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:54.726765+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:55.726959+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:56.727086+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:57.727252+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:58.727422+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 147456 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:59.727567+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 147456 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:00.727725+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 147456 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:01.727921+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 147456 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:02.728054+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 147456 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:03.728189+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:04.728335+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:05.728490+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:06.728621+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:07.728835+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:08.728995+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:09.729175+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:10.729322+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:11.729439+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:12.729616+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:13.729748+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:14.729955+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:15.730172+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:16.730317+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:17.730467+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:18.730617+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:19.730757+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:20.730925+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:21.731083+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:22.731234+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:23.731391+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:24.731536+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:25.731741+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:26.731913+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:27.732060+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:28.732185+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:29.732341+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:30.732464+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:31.732629+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:32.732803+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:33.733016+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:34.733145+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:35.733283+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:36.733627+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:37.733769+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:38.733905+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:39.734029+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:40.734192+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:41.734418+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:42.734575+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:43.734746+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:44.734941+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:45.735117+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:46.735291+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:47.735466+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:48.735630+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:49.735795+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:50.735975+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:51.736133+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:52.736263+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:53.736391+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:54.736585+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:55.736807+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:56.736963+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:57.737122+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:58.737282+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:59.737450+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:00.737617+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:01.737799+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:02.737962+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:03.738117+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:04.738280+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:05.738472+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:06.738702+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:07.738881+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:08.739083+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:09.740058+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:10.740289+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:11.740466+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:12.740703+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:13.741074+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:14.741243+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:15.741409+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:16.741569+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:17.741747+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:18.741871+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:19.742047+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:20.742206+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:21.742388+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:22.742599+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:23.742795+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:24.743010+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:25.743252+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:26.743431+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:27.743623+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 114688 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:28.743793+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:29.743960+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:30.744073+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:31.744219+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:32.744376+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:33.744627+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:34.744804+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:35.745004+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:36.745997+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:37.746115+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:38.746258+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:39.746387+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:40.746512+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:41.746732+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:42.746974+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:43.747106+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:44.747368+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:45.747587+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:46.747853+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:47.748141+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:48.748420+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:49.748595+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:50.748839+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:51.749113+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:52.749313+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:53.749522+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:54.749721+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:55.749918+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:56.750115+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:57.750364+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:58.750554+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:59.750652+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:00.750751+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:01.750952+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:02.751152+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:03.751315+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:04.751479+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:05.751663+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:06.751795+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:07.751936+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:08.752092+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:09.752240+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:10.752399+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:11.752578+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:12.752759+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:13.752942+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:14.753111+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:15.753288+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:16.753454+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:17.753615+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:18.753818+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:19.753999+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:20.754156+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:21.754314+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:22.754502+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:23.754681+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:24.754818+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:25.755042+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:26.755227+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:27.755425+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:28.755592+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:29.755762+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:30.755901+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:31.756157+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:32.756641+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:33.756776+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:34.756999+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:35.757221+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:36.757387+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:37.757549+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:38.757732+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:39.757889+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:40.758068+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:41.758227+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:42.758462+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:43.758707+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:44.758964+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:45.759196+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:46.759365+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:47.759562+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:48.759689+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:49.759854+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:50.760062+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:51.760344+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:52.760568+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:53.760729+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:54.760933+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:55.761294+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:56.761467+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:57.761697+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:58.761938+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:59.762232+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:00.762472+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:01.762639+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:02.762774+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:03.762961+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:04.763151+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:05.763489+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:06.763708+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:07.763878+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:08.764110+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:09.764236+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:10.764492+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:11.764625+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:12.764864+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:13.765064+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:14.765214+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:15.765504+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:16.765712+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:17.766025+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:18.766240+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:19.766415+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:20.766560+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:21.766831+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:22.767020+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:23.767289+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:24.767463+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:25.768011+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:26.768423+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:27.768582+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:28.768791+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:29.768933+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:30.769109+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:31.769247+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:32.769392+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:33.769559+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:34.769782+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:35.770025+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:36.770295+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:37.770550+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:38.770738+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:39.770983+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:40.771172+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:41.771361+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:42.771592+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 40960 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:43.771984+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 40960 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:44.772239+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 40960 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:45.772535+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 40960 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:46.772796+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 40960 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:47.773074+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:48.773320+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:49.775265+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:50.775597+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:51.776028+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:52.776260+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:53.776527+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:54.776775+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:55.776994+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:56.777215+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:57.777644+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:58.778149+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:59.778461+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:00.778833+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:01.778995+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:02.779333+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:03.779625+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:04.780004+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:05.780245+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:06.780453+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:07.780651+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:08.780784+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:09.780929+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:10.781172+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:11.781279+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:12.781470+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:13.781648+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:14.781996+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:15.782252+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:16.782402+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:17.782560+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:18.782747+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:19.782966+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:20.783158+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:21.783350+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:22.783492+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:23.783657+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:24.783844+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:25.784092+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:26.784277+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:27.784449+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:28.784614+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:29.784771+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:30.785013+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:31.785175+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:32.785326+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 16384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:33.785468+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 16384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:34.785668+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 16384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:35.786006+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 16384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:36.786149+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 16384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:37.786310+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:38.786513+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 16384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:39.786654+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 16384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:40.786800+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 16384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 6849 writes, 28K keys, 6849 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6849 writes, 1288 syncs, 5.32 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 277 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f21090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f21090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f21090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:41.787021+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:42.787164+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:43.787310+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:44.787487+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:45.787679+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:46.787844+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:47.788017+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:48.788175+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:49.788354+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:50.788465+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:51.788615+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:52.788791+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:53.788994+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:54.789303+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:55.789523+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:56.789689+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:57.789935+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:58.790196+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:59.790359+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:00.800086+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:01.800282+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:02.800488+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:03.800682+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:04.800875+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:05.801169+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:06.801443+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:07.801636+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:08.801871+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:09.802215+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:10.802470+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:11.802658+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:12.802961+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:13.803132+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:14.803298+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:15.803492+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:16.803627+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:17.803808+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:18.804039+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:19.804190+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:20.804359+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:21.804600+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:22.804748+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:23.804942+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:24.805063+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:25.805205+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:26.805341+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:27.805484+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:28.805614+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:29.805752+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:30.805880+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:31.806179+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:32.806304+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:33.806406+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:34.806573+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:35.806781+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:36.806974+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:37.807114+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:38.807273+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 600.429077148s of 601.145690918s, submitted: 90
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:39.807415+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 1171456 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:40.807536+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 1163264 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:41.807671+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:42.807782+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:43.807928+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:44.808085+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:45.808262+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:46.808404+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:47.808526+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:48.808659+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:49.808822+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:50.808981+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:51.809149+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:52.810165+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:53.810315+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:54.810492+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:55.810725+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:56.811213+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:57.811375+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:58.811491+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:59.811617+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:00.811722+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:01.811934+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:02.812057+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:03.812212+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:04.812387+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:05.812603+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:06.812775+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:07.812929+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:08.813059+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:09.813196+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:10.813392+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:11.813562+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:12.813710+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:13.813829+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:14.814044+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:15.814235+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:16.814360+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:17.814672+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:18.814801+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:19.815016+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:20.815147+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:21.815285+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:22.815461+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:23.815599+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:24.815785+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:25.816008+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:26.816162+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:27.816318+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:28.816514+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:29.816760+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:30.816871+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:31.817052+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:32.817203+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:33.817346+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:34.817514+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:35.817677+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:36.817813+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:37.817943+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:38.818091+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:39.818237+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:40.818392+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:41.818559+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:42.818704+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:43.818927+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:44.819089+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:45.819326+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:46.819493+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:47.819688+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:48.819872+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:49.820080+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:50.820238+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:51.820389+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:52.820550+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:53.820752+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:54.821029+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:55.821233+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:56.821380+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:57.821524+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:58.821663+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:59.821838+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:00.821970+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:01.822124+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:02.822306+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:03.822552+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:04.822781+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:05.822988+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:06.823996+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:07.824211+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:08.824481+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:09.824786+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:10.824933+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:11.825138+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:12.825299+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:13.825533+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:14.825670+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:15.825861+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:16.826048+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:17.826224+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:18.826381+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:19.826565+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:20.826721+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:21.826936+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:22.827068+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:23.827188+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:24.827342+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:25.827545+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:26.827736+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:27.827962+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:28.828148+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:29.828385+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:30.828637+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:31.828803+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:32.829029+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:33.829218+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:34.829403+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:35.829598+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:36.829826+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:37.830039+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:38.830236+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:39.830379+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:40.830523+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:41.830732+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:42.830986+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:43.831190+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:44.831365+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:45.831544+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:46.831745+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:47.831988+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:48.832221+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:49.832389+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:50.832553+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:51.832744+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:52.832974+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:53.833160+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:54.833343+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:55.833592+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:56.833852+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:57.834010+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:58.834172+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:59.834357+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:00.834516+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:01.834643+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:02.834812+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:03.834968+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:04.835094+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:05.835281+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:06.835457+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:07.835635+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:08.835822+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:09.835965+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:10.836094+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:11.836223+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:12.836380+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:13.836533+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:14.836716+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:15.836957+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:16.837176+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:17.837315+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:18.837469+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:19.837632+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:20.837803+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:21.837931+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:22.838065+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:23.838221+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:24.838401+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:25.838586+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:26.838722+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:27.838874+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:28.839092+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:29.839244+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:30.839370+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:31.839501+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:32.839691+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:33.839804+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:34.839988+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:35.840173+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:36.840309+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:37.840442+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:38.840609+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:39.840794+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:40.840989+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:41.841187+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:42.841402+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:43.841609+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:44.841813+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:45.842014+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:46.843087+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:47.843302+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:48.843525+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:49.843667+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:50.843816+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:51.843954+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:52.844093+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:53.844273+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:54.844430+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:55.844635+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:56.844809+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:57.844987+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:58.845143+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:59.845323+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:00.845437+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:01.845580+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:02.845726+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:03.845881+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:04.846103+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:05.846313+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:06.846485+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:07.846635+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:08.846933+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:09.847234+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:10.847474+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:11.847653+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:12.847801+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:13.848022+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:14.848171+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 2179072 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:15.848361+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 2179072 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:16.848503+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 2179072 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:17.848684+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 2179072 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:18.848851+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 2179072 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:19.849007+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 2179072 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:20.849158+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 2179072 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:21.849301+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c76f0800
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 222.424591064s of 223.227157593s, submitted: 90
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 2154496 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:22.849446+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fca3b000/0x0/0x4ffc00000, data 0x12b85e/0x1e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:23.849607+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 18939904 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 125 ms_handle_reset con 0x5624c76f0800 session 0x5624c96925a0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:24.849742+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 17784832 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c89dec00
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:25.849978+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 17776640 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065174 data_alloc: 218103808 data_used: 253952
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:26.850137+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 17743872 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 126 ms_handle_reset con 0x5624c89dec00 session 0x5624c93f25a0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fadc3000/0x0/0x4ffc00000, data 0x1d9f016/0x1e5b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:27.850273+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 17735680 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:28.850411+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 17735680 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:29.850580+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 17735680 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:30.851586+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 17735680 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073946 data_alloc: 218103808 data_used: 266240
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:31.851699+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:32.851849+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fadbb000/0x0/0x4ffc00000, data 0x1da2635/0x1e62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:33.851981+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:34.852180+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:35.852402+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073946 data_alloc: 218103808 data_used: 266240
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:36.852562+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:37.852677+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:38.852834+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fadbb000/0x0/0x4ffc00000, data 0x1da2635/0x1e62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fadbb000/0x0/0x4ffc00000, data 0x1da2635/0x1e62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:39.852991+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:40.853149+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073946 data_alloc: 218103808 data_used: 266240
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:41.853278+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:42.853406+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:43.853562+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c89dfc00
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.713157654s of 21.930625916s, submitted: 48
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fadbb000/0x0/0x4ffc00000, data 0x1da2635/0x1e62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:44.853719+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 17645568 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:45.853883+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 16547840 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Got map version 10
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c906f800
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076438 data_alloc: 218103808 data_used: 266240
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:46.854281+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 16777216 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:47.854399+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 16777216 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:48.854548+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 16556032 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:49.854699+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fada1000/0x0/0x4ffc00000, data 0x1dbb67c/0x1e7d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:50.854884+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 16465920 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080228 data_alloc: 218103808 data_used: 266240
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:51.855165+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 16465920 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:52.855311+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 16433152 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:53.855443+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 16392192 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Got map version 11
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.376805305s of 10.613768578s, submitted: 47
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:54.855581+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 16359424 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:55.855789+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 15228928 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fad8c000/0x0/0x4ffc00000, data 0x1dd04d9/0x1e92000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082408 data_alloc: 218103808 data_used: 266240
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:56.855976+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15147008 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:57.856150+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15147008 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:58.856300+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fad87000/0x0/0x4ffc00000, data 0x1dd57f8/0x1e97000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 15114240 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:59.856427+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15073280 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:00.856553+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15048704 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fad80000/0x0/0x4ffc00000, data 0x1ddc97f/0x1e9e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082632 data_alloc: 218103808 data_used: 266240
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:01.856676+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15015936 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:02.856873+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15015936 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:03.857206+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15015936 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.274366379s of 10.007667542s, submitted: 35
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:04.857391+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 14942208 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fad70000/0x0/0x4ffc00000, data 0x1deae65/0x1eae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:05.857569+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 14884864 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088736 data_alloc: 218103808 data_used: 266240
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:06.857715+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 14786560 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:07.857943+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 14753792 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:08.858122+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 14614528 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:09.858297+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fad5e000/0x0/0x4ffc00000, data 0x1dfda54/0x1ec0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 14614528 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:10.858443+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 14475264 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096132 data_alloc: 218103808 data_used: 274432
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:11.858594+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 14401536 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:12.858779+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 14376960 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:13.858955+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 14303232 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:14.859101+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.836385727s of 10.053128242s, submitted: 94
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 14254080 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fad36000/0x0/0x4ffc00000, data 0x1e24f66/0x1ee8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:15.859253+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 14254080 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092610 data_alloc: 218103808 data_used: 274432
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:16.859383+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 14229504 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:17.859555+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 13131776 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:18.859701+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 12083200 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:19.859878+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9b85000/0x0/0x4ffc00000, data 0x1e3563a/0x1ef9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 12066816 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:20.860117+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 12042240 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9b82000/0x0/0x4ffc00000, data 0x1e3aaac/0x1efc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096620 data_alloc: 218103808 data_used: 282624
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:21.860296+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 12042240 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:22.860464+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 12017664 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9b7d000/0x0/0x4ffc00000, data 0x1e3deac/0x1f00000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:23.860651+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 12017664 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:24.860817+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 12017664 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.922076225s of 10.262989998s, submitted: 41
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:25.861032+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 11952128 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099828 data_alloc: 218103808 data_used: 282624
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:26.861182+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 11952128 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9b72000/0x0/0x4ffc00000, data 0x1e47d7b/0x1f0c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:27.861372+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 11952128 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:28.861519+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 11935744 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:29.861684+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 11935744 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:30.861951+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 11919360 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:31.862135+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097878 data_alloc: 218103808 data_used: 282624
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 11919360 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9b72000/0x0/0x4ffc00000, data 0x1e48e86/0x1f0c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:32.862297+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 11788288 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9b68000/0x0/0x4ffc00000, data 0x1e516b2/0x1f16000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:33.862463+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 11788288 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:34.862671+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 11788288 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.909614563s of 10.000017166s, submitted: 23
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:35.862834+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 11722752 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:36.862982+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105184 data_alloc: 218103808 data_used: 290816
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 11706368 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:37.863163+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 11706368 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:38.863344+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 11706368 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 130 heartbeat osd_stat(store_statfs(0x4f9b5c000/0x0/0x4ffc00000, data 0x1e5b885/0x1f21000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:39.863500+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 11706368 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 130 heartbeat osd_stat(store_statfs(0x4f9b57000/0x0/0x4ffc00000, data 0x1e616a2/0x1f27000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:40.863692+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 11689984 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:41.863827+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103212 data_alloc: 218103808 data_used: 290816
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 11689984 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 130 heartbeat osd_stat(store_statfs(0x4f9b56000/0x0/0x4ffc00000, data 0x1e62682/0x1f28000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:42.863999+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 11689984 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:43.864135+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 11665408 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:44.864272+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 11665408 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.819531441s of 10.000153542s, submitted: 46
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:45.864504+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 130 heartbeat osd_stat(store_statfs(0x4f9b4b000/0x0/0x4ffc00000, data 0x1e6cc8a/0x1f33000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 11640832 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:46.864675+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111098 data_alloc: 218103808 data_used: 299008
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 11501568 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:47.864786+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 11247616 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:48.864967+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 11214848 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:49.866996+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 11255808 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:50.867127+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 11255808 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 131 heartbeat osd_stat(store_statfs(0x4f9b27000/0x0/0x4ffc00000, data 0x1e90a50/0x1f57000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:51.867276+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111162 data_alloc: 218103808 data_used: 299008
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 11206656 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 131 heartbeat osd_stat(store_statfs(0x4f9b23000/0x0/0x4ffc00000, data 0x1e94fb8/0x1f5b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:52.867567+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 11206656 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:53.867750+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 11206656 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:54.867952+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.782431602s of 10.000225067s, submitted: 42
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 11149312 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:55.868134+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 11091968 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:56.868319+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116414 data_alloc: 218103808 data_used: 303104
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 10895360 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 131 heartbeat osd_stat(store_statfs(0x4f9b05000/0x0/0x4ffc00000, data 0x1eb2a69/0x1f79000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:57.868526+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 10895360 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:58.868700+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 10797056 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:59.868850+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 9682944 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:00.869034+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 9494528 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 131 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:01.869215+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121752 data_alloc: 218103808 data_used: 311296
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 9404416 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9adf000/0x0/0x4ffc00000, data 0x1ed6642/0x1f9e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:02.869381+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 9863168 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:03.869544+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 86679552 unmapped: 9740288 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:04.869683+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.387005329s of 10.018237114s, submitted: 70
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 86720512 unmapped: 9699328 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:05.869838+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9abc000/0x0/0x4ffc00000, data 0x1ef9b32/0x1fc2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 132 handle_osd_map epochs [133,133], i have 133, src has [1,133]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 9494528 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:06.870074+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132746 data_alloc: 218103808 data_used: 319488
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9aa3000/0x0/0x4ffc00000, data 0x1f0fc1a/0x1fda000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 9461760 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:07.870230+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 9396224 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Got map version 12
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:08.870372+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 9445376 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c906e000
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:09.870554+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 87105536 unmapped: 9314304 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:10.870735+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 87089152 unmapped: 9330688 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:11.870967+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142894 data_alloc: 218103808 data_used: 327680
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 87195648 unmapped: 9224192 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:12.871129+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f963e000/0x0/0x4ffc00000, data 0x1f62185/0x202f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 87474176 unmapped: 8945664 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:13.871283+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 88375296 unmapped: 8044544 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:14.871426+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.033059359s of 10.011897087s, submitted: 159
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 8036352 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:15.871627+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 8019968 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:16.871751+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148250 data_alloc: 218103808 data_used: 339968
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 8175616 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:17.871936+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 88276992 unmapped: 8142848 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f95ff000/0x0/0x4ffc00000, data 0x1fa032f/0x206f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:18.872047+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 88276992 unmapped: 8142848 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f95ff000/0x0/0x4ffc00000, data 0x1fa035e/0x206e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:19.872187+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 8028160 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:20.872335+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 6848512 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:21.872460+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154230 data_alloc: 218103808 data_used: 348160
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 6832128 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:22.872792+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 6955008 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:23.872929+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 6955008 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f95e4000/0x0/0x4ffc00000, data 0x1fba00f/0x208a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:24.873053+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 6930432 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:25.873215+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89628672 unmapped: 6791168 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.069796562s of 11.403741837s, submitted: 118
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:26.873280+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156420 data_alloc: 218103808 data_used: 360448
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89227264 unmapped: 7192576 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:27.873438+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89227264 unmapped: 7192576 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:28.873600+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89227264 unmapped: 7192576 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:29.873738+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f95c0000/0x0/0x4ffc00000, data 0x1fdb698/0x20ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89227264 unmapped: 7192576 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:30.873848+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89292800 unmapped: 7127040 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:31.873971+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160980 data_alloc: 218103808 data_used: 364544
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f95a6000/0x0/0x4ffc00000, data 0x1ff64a7/0x20c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 7020544 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:32.874104+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90750976 unmapped: 5668864 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f9586000/0x0/0x4ffc00000, data 0x2015d8e/0x20e8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:33.874217+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90750976 unmapped: 5668864 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:34.874450+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 5529600 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:35.874592+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 5996544 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:36.874701+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165594 data_alloc: 218103808 data_used: 364544
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.134466171s of 10.624387741s, submitted: 50
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90587136 unmapped: 5832704 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f9549000/0x0/0x4ffc00000, data 0x20539a6/0x2125000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:37.875004+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90365952 unmapped: 6053888 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:38.875115+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90456064 unmapped: 5963776 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:39.875273+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90701824 unmapped: 5718016 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:40.875408+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90578944 unmapped: 5840896 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9511000/0x0/0x4ffc00000, data 0x208b773/0x215c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:41.875572+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169704 data_alloc: 218103808 data_used: 368640
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90693632 unmapped: 5726208 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:42.875718+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90955776 unmapped: 5464064 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f94f0000/0x0/0x4ffc00000, data 0x20acef8/0x217d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:43.875832+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90824704 unmapped: 5595136 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:44.876009+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f94da000/0x0/0x4ffc00000, data 0x20c3552/0x2194000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90873856 unmapped: 5545984 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:45.876176+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 4390912 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:46.876314+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178448 data_alloc: 218103808 data_used: 376832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.355510712s of 10.051280022s, submitted: 102
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 4349952 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:47.876491+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 4349952 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:48.876628+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 4349952 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:49.877084+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f94ad000/0x0/0x4ffc00000, data 0x20f0f7c/0x21c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,4])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 92135424 unmapped: 4284416 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:50.877204+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 4816896 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:51.877290+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180540 data_alloc: 218103808 data_used: 380928
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 4816896 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:52.877433+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f948b000/0x0/0x4ffc00000, data 0x2111fa4/0x21e3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 91947008 unmapped: 4472832 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:53.877588+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 91947008 unmapped: 4472832 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:54.877742+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 92299264 unmapped: 4120576 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:55.877961+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 92053504 unmapped: 4366336 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:56.878105+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186900 data_alloc: 218103808 data_used: 389120
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 92127232 unmapped: 4292608 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f943c000/0x0/0x4ffc00000, data 0x215ea3d/0x2231000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:57.878347+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 4210688 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:58.878721+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.927650452s of 12.131904602s, submitted: 70
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 92430336 unmapped: 3989504 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:59.879443+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 91717632 unmapped: 4702208 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:00.879986+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 91717632 unmapped: 4702208 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:01.880147+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191678 data_alloc: 218103808 data_used: 397312
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 4505600 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9407000/0x0/0x4ffc00000, data 0x2191fd3/0x2266000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:02.880345+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 3284992 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:03.880531+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 3301376 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:04.880710+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93257728 unmapped: 3162112 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:05.881090+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f93df000/0x0/0x4ffc00000, data 0x21bbc75/0x228f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93257728 unmapped: 3162112 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f93df000/0x0/0x4ffc00000, data 0x21bbc75/0x228f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:06.881290+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192768 data_alloc: 218103808 data_used: 397312
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93257728 unmapped: 3162112 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:07.881505+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93413376 unmapped: 3006464 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:08.881721+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93413376 unmapped: 3006464 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:09.881980+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.396786690s of 10.760301590s, submitted: 51
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 2973696 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:10.882219+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f93be000/0x0/0x4ffc00000, data 0x21dd92a/0x22b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 2867200 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:11.882460+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197862 data_alloc: 218103808 data_used: 397312
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 2867200 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:12.882639+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93585408 unmapped: 2834432 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Got map version 13
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:13.882817+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 2867200 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:14.883015+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 2867200 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:15.883237+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 2867200 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:16.883427+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f939d000/0x0/0x4ffc00000, data 0x21feab8/0x22d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199996 data_alloc: 218103808 data_used: 397312
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93691904 unmapped: 2727936 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:17.883616+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93691904 unmapped: 2727936 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f939d000/0x0/0x4ffc00000, data 0x21feb82/0x22d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:18.883794+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93691904 unmapped: 2727936 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:19.883989+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.056099892s of 10.000339508s, submitted: 22
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93782016 unmapped: 2637824 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:20.884172+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93872128 unmapped: 2547712 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:21.884294+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208408 data_alloc: 218103808 data_used: 397312
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9364000/0x0/0x4ffc00000, data 0x2236003/0x230a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93880320 unmapped: 2539520 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:22.884473+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93683712 unmapped: 2736128 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:23.884627+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93683712 unmapped: 2736128 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:24.884755+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93798400 unmapped: 2621440 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:25.884985+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f933d000/0x0/0x4ffc00000, data 0x225b5b0/0x2330000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93863936 unmapped: 2555904 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:26.885155+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206676 data_alloc: 218103808 data_used: 397312
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 2162688 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:27.885399+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 2162688 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:28.885547+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 94388224 unmapped: 2031616 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:29.885719+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.778255463s of 10.000174522s, submitted: 48
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 2007040 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:30.885869+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9305000/0x0/0x4ffc00000, data 0x2293118/0x2369000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 94568448 unmapped: 1851392 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:31.886009+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210244 data_alloc: 218103808 data_used: 397312
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 94806016 unmapped: 1613824 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:32.886124+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 94806016 unmapped: 1613824 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:33.886262+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 95059968 unmapped: 1359872 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:34.886422+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f92d7000/0x0/0x4ffc00000, data 0x22c2b58/0x2397000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2236416 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f929a000/0x0/0x4ffc00000, data 0x22ff727/0x23d4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:35.886598+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 95240192 unmapped: 2228224 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:36.886729+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218020 data_alloc: 218103808 data_used: 397312
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 95240192 unmapped: 2228224 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:37.886875+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 95846400 unmapped: 1622016 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:38.887060+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 95895552 unmapped: 1572864 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:39.887217+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.700897217s of 10.000336647s, submitted: 69
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 1531904 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:40.887404+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9263000/0x0/0x4ffc00000, data 0x2337974/0x240b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 ms_handle_reset con 0x5624c906e000 session 0x5624c9698780
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 770048 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:41.887550+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1220104 data_alloc: 218103808 data_used: 397312
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 96706560 unmapped: 761856 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:42.887695+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Got map version 14
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 96845824 unmapped: 1671168 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:43.887990+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9236000/0x0/0x4ffc00000, data 0x2364498/0x2438000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 97034240 unmapped: 1482752 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:44.888127+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 2531328 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f91fe000/0x0/0x4ffc00000, data 0x239c08a/0x2470000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:45.888290+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 2531328 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:46.888520+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231956 data_alloc: 218103808 data_used: 397312
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 96313344 unmapped: 2203648 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:47.888749+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 96313344 unmapped: 2203648 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:48.888954+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 96108544 unmapped: 2408448 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f91cb000/0x0/0x4ffc00000, data 0x23ce53a/0x24a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:49.889119+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.741598129s of 10.000253677s, submitted: 243
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 97427456 unmapped: 2138112 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:50.889250+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 97427456 unmapped: 2138112 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:51.889426+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f919b000/0x0/0x4ffc00000, data 0x23fd8a5/0x24d2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237066 data_alloc: 218103808 data_used: 397312
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 1982464 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:52.889604+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 98091008 unmapped: 1474560 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9176000/0x0/0x4ffc00000, data 0x2422384/0x24f6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:53.889771+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 98091008 unmapped: 1474560 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:54.889917+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 98107392 unmapped: 1458176 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:55.890108+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 97607680 unmapped: 1957888 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:56.890244+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238070 data_alloc: 218103808 data_used: 397312
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 97771520 unmapped: 1794048 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:57.890416+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 1777664 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:58.890586+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f912d000/0x0/0x4ffc00000, data 0x246cf23/0x2541000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 97976320 unmapped: 1589248 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:59.890717+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.649096489s of 10.000499725s, submitted: 77
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 483328 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:00.890935+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 483328 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:01.891135+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f90f4000/0x0/0x4ffc00000, data 0x24a58f7/0x257a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241438 data_alloc: 218103808 data_used: 397312
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 99328000 unmapped: 1286144 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:02.891242+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 99426304 unmapped: 1187840 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:03.891396+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 99426304 unmapped: 1187840 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:04.891553+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 99270656 unmapped: 2392064 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:05.891731+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9089000/0x0/0x4ffc00000, data 0x2510b2b/0x25e5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 99278848 unmapped: 2383872 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:06.891870+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248896 data_alloc: 218103808 data_used: 397312
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 99328000 unmapped: 2334720 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:07.892063+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 99614720 unmapped: 2048000 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:08.892218+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 99614720 unmapped: 2048000 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:09.892413+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.705707550s of 10.003671646s, submitted: 73
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 100671488 unmapped: 991232 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:10.892527+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f903b000/0x0/0x4ffc00000, data 0x255cf13/0x2632000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 100671488 unmapped: 991232 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:11.892608+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9024000/0x0/0x4ffc00000, data 0x257514a/0x264a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250104 data_alloc: 218103808 data_used: 397312
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 983040 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:12.892766+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 2007040 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:13.892987+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9024000/0x0/0x4ffc00000, data 0x257514a/0x264a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 2179072 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:14.893116+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 100597760 unmapped: 2113536 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:15.893288+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 100597760 unmapped: 2113536 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:16.893453+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257856 data_alloc: 218103808 data_used: 397312
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 1900544 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:17.893624+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 1900544 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f8f97000/0x0/0x4ffc00000, data 0x2601c98/0x26d6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:18.893729+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 1900544 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:19.893843+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.710945129s of 10.001224518s, submitted: 71
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 102039552 unmapped: 1720320 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:20.893978+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 102031360 unmapped: 1728512 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:21.894091+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f8f71000/0x0/0x4ffc00000, data 0x262894d/0x26fd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268920 data_alloc: 218103808 data_used: 397312
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 102031360 unmapped: 1728512 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:22.894216+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 101146624 unmapped: 2613248 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:23.894385+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 101146624 unmapped: 2613248 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:24.894506+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 101146624 unmapped: 2613248 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:25.894687+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 101466112 unmapped: 2293760 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:26.894871+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264818 data_alloc: 218103808 data_used: 397312
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 101564416 unmapped: 2195456 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:27.895213+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f8eef000/0x0/0x4ffc00000, data 0x26aac43/0x277f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 103047168 unmapped: 712704 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:28.895577+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 103120896 unmapped: 638976 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:29.895843+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.532333374s of 10.003606796s, submitted: 97
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f8e9b000/0x0/0x4ffc00000, data 0x26ffb29/0x27d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 843776 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:30.896072+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 1884160 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:31.896235+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278808 data_alloc: 218103808 data_used: 397312
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 103292928 unmapped: 1515520 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:32.896437+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f8e49000/0x0/0x4ffc00000, data 0x2751053/0x2825000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 103309312 unmapped: 1499136 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:33.896613+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 103309312 unmapped: 1499136 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:34.896772+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 104939520 unmapped: 917504 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:35.896972+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 1753088 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:36.897150+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293012 data_alloc: 218103808 data_used: 405504
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 1728512 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:37.897383+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f8ddc000/0x0/0x4ffc00000, data 0x27bbdb3/0x2891000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 2408448 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:38.897518+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 104554496 unmapped: 2351104 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:39.897738+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.513430595s of 10.029569626s, submitted: 121
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 104742912 unmapped: 2162688 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:40.897997+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 143 handle_osd_map epochs [145,145], i have 143, src has [1,145]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 143 handle_osd_map epochs [144,145], i have 143, src has [1,145]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 2031616 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:41.898195+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297696 data_alloc: 218103808 data_used: 413696
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 104996864 unmapped: 1908736 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:42.898407+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 737280 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:43.898574+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8d5d000/0x0/0x4ffc00000, data 0x2837930/0x2910000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 598016 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:44.898747+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 491520 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:45.898954+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 145 handle_osd_map epochs [146,146], i have 146, src has [1,146]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f8d04000/0x0/0x4ffc00000, data 0x288db98/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,4])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 106479616 unmapped: 425984 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:46.899133+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305906 data_alloc: 218103808 data_used: 421888
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 106782720 unmapped: 122880 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:47.899302+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 106782720 unmapped: 122880 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:48.899471+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f8cea000/0x0/0x4ffc00000, data 0x28a9d25/0x2984000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 107175936 unmapped: 1826816 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:49.899618+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.949766874s of 10.132806778s, submitted: 104
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:50.899794+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 2351104 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:51.900026+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 2351104 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313234 data_alloc: 218103808 data_used: 421888
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:52.900219+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 2285568 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:53.900356+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 843776 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f8c75000/0x0/0x4ffc00000, data 0x2921057/0x29f9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:54.900511+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 835584 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:55.900696+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 1851392 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:56.901012+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 1540096 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f7a94000/0x0/0x4ffc00000, data 0x2960a9c/0x2a3a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x572f9c7), peers [0,2] op hist [0,0,0,1])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318994 data_alloc: 218103808 data_used: 421888
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:57.901165+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 1810432 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:58.901351+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 1802240 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:59.901660+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 1662976 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:00.901875+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 1662976 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.616237640s of 11.104003906s, submitted: 67
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:01.902108+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 1662976 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317914 data_alloc: 218103808 data_used: 421888
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:02.902311+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 1662976 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f7a6a000/0x0/0x4ffc00000, data 0x298b94d/0x2a64000, compress 0x0/0x0/0x0, omap 0x639, meta 0x572f9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:03.902526+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 1630208 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:04.902766+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 1630208 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f68ba000/0x0/0x4ffc00000, data 0x299aed9/0x2a74000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:05.902973+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 109387776 unmapped: 1712128 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:06.903201+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 2113536 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324516 data_alloc: 218103808 data_used: 421888
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:07.903369+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 2113536 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:08.903494+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108609536 unmapped: 2490368 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:09.903634+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108609536 unmapped: 2490368 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:10.903758+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108609536 unmapped: 2490368 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.896729469s of 10.008993149s, submitted: 23
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f6890000/0x0/0x4ffc00000, data 0x29c5ae8/0x2a9e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:11.903876+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108609536 unmapped: 2490368 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317916 data_alloc: 218103808 data_used: 421888
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:12.904112+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108617728 unmapped: 2482176 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:13.904245+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 2457600 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:14.904380+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 2457600 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:15.904561+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 2457600 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f688f000/0x0/0x4ffc00000, data 0x29c5b50/0x2a9e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:16.904760+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 2457600 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1319508 data_alloc: 218103808 data_used: 421888
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:17.904938+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 2449408 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:18.905109+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 2449408 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:19.905280+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 2449408 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:20.905548+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 2449408 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.780126572s of 10.010715485s, submitted: 9
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:21.905748+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 2449408 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f6890000/0x0/0x4ffc00000, data 0x29c5b4e/0x2a9e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318994 data_alloc: 218103808 data_used: 421888
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:22.905956+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 2449408 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:23.906142+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 2449408 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:24.906327+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 2449408 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:25.906606+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 2449408 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f688f000/0x0/0x4ffc00000, data 0x29c5c22/0x2a9f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:26.906750+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 2441216 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323974 data_alloc: 218103808 data_used: 421888
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:27.906993+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 2441216 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:28.907138+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 2441216 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:29.907275+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 2441216 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:30.907466+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 2441216 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:31.908253+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 2441216 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f688d000/0x0/0x4ffc00000, data 0x29c5d22/0x2aa1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323974 data_alloc: 218103808 data_used: 421888
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:32.908384+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 2441216 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f688d000/0x0/0x4ffc00000, data 0x29c5d22/0x2aa1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:33.908516+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 2441216 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:34.908712+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 2441216 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.401308060s of 13.708648682s, submitted: 8
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:35.908950+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 2433024 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:36.909136+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 2433024 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325566 data_alloc: 218103808 data_used: 421888
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:37.909269+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f688d000/0x0/0x4ffc00000, data 0x29c5d22/0x2aa1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 2433024 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:38.909458+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 2433024 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f688c000/0x0/0x4ffc00000, data 0x29c5e22/0x2aa2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:39.909601+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 2433024 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:40.909747+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 2433024 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2753 syncs, 3.79 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3588 writes, 13K keys, 3588 commit groups, 1.0 writes per commit group, ingest: 19.98 MB, 0.03 MB/s
                                           Interval WAL: 3588 writes, 1465 syncs, 2.45 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:41.910035+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108691456 unmapped: 2408448 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f688c000/0x0/0x4ffc00000, data 0x29c5e22/0x2aa2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325566 data_alloc: 218103808 data_used: 421888
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:42.910260+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108691456 unmapped: 2408448 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:43.910459+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108691456 unmapped: 2408448 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:44.910688+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108691456 unmapped: 2408448 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.408632278s of 10.221953392s, submitted: 7
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:45.911517+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108691456 unmapped: 2408448 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:46.911665+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 ms_handle_reset con 0x5624c6308c00 session 0x5624c5d04f00
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c906e000
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108691456 unmapped: 2408448 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc ms_handle_reset ms_handle_reset con 0x5624c6309c00
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3235544197
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: get_auth_request con 0x5624c682fc00 auth_method 0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc handle_mgr_configure stats_period=5
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f688a000/0x0/0x4ffc00000, data 0x29c5f32/0x2aa4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:47.911796+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328398 data_alloc: 218103808 data_used: 421888
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108797952 unmapped: 2301952 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 ms_handle_reset con 0x5624c62e1800 session 0x5624c923b4a0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c76f0c00
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 ms_handle_reset con 0x5624c62e0400 session 0x5624c966a5a0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c62e1800
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f688a000/0x0/0x4ffc00000, data 0x29c5f32/0x2aa4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:48.911930+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108806144 unmapped: 2293760 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:49.912074+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108814336 unmapped: 2285568 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:50.912221+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108814336 unmapped: 2285568 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:51.912427+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108814336 unmapped: 2285568 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f647b000/0x0/0x4ffc00000, data 0x29c5fc6/0x2aa3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:52.929069+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327884 data_alloc: 218103808 data_used: 421888
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108814336 unmapped: 2285568 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:53.929371+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108838912 unmapped: 2260992 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:54.929507+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108838912 unmapped: 2260992 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.027173042s of 10.099073410s, submitted: 15
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:55.929703+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108838912 unmapped: 2260992 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f647b000/0x0/0x4ffc00000, data 0x29c6015/0x2aa2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:56.929983+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108838912 unmapped: 2260992 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:57.930100+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328946 data_alloc: 218103808 data_used: 421888
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108838912 unmapped: 2260992 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:58.930216+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108847104 unmapped: 2252800 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:59.930403+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108847104 unmapped: 2252800 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f647a000/0x0/0x4ffc00000, data 0x29c620c/0x2aa3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:00.930738+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108847104 unmapped: 2252800 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f647a000/0x0/0x4ffc00000, data 0x29c620c/0x2aa3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:01.930882+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108847104 unmapped: 2252800 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:02.931182+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327920 data_alloc: 218103808 data_used: 421888
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108847104 unmapped: 2252800 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:03.931286+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108847104 unmapped: 2252800 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:04.931538+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108847104 unmapped: 2252800 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.809282780s of 10.259056091s, submitted: 15
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f647c000/0x0/0x4ffc00000, data 0x29c629e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:05.931842+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108871680 unmapped: 2228224 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:06.931953+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108871680 unmapped: 2228224 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:07.932133+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330894 data_alloc: 218103808 data_used: 421888
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108871680 unmapped: 2228224 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:08.932406+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108879872 unmapped: 2220032 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f6479000/0x0/0x4ffc00000, data 0x29c649b/0x2aa5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:09.932540+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108879872 unmapped: 2220032 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f6479000/0x0/0x4ffc00000, data 0x29c649b/0x2aa5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:10.932652+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108879872 unmapped: 2220032 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:11.932947+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108879872 unmapped: 2220032 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:12.933249+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335318 data_alloc: 218103808 data_used: 421888
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108879872 unmapped: 2220032 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:13.933394+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108879872 unmapped: 2220032 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f6477000/0x0/0x4ffc00000, data 0x29c65c8/0x2aa6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:14.933945+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f6477000/0x0/0x4ffc00000, data 0x29c65c8/0x2aa6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108879872 unmapped: 2220032 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.801453590s of 10.081443787s, submitted: 18
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:15.934158+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108879872 unmapped: 2220032 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:16.934325+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108879872 unmapped: 2220032 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:17.934437+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336524 data_alloc: 218103808 data_used: 421888
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108879872 unmapped: 2220032 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:18.934615+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 2203648 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:19.934744+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108904448 unmapped: 2195456 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f6475000/0x0/0x4ffc00000, data 0x29c66c4/0x2aa7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:20.934933+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108904448 unmapped: 2195456 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:21.935070+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108912640 unmapped: 2187264 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:22.935234+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335770 data_alloc: 218103808 data_used: 421888
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108912640 unmapped: 2187264 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f6479000/0x0/0x4ffc00000, data 0x29c65fe/0x2aa5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:23.935352+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108912640 unmapped: 2187264 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:24.935482+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108912640 unmapped: 2187264 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:25.935666+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108912640 unmapped: 3235840 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.637481689s of 10.894768715s, submitted: 29
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:26.935816+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 3252224 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:27.936003+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f6477000/0x0/0x4ffc00000, data 0x29c8283/0x2aa6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338020 data_alloc: 218103808 data_used: 430080
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 3252224 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:28.936133+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108904448 unmapped: 3244032 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:29.936246+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108904448 unmapped: 3244032 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:30.936392+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108929024 unmapped: 3219456 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:31.936577+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108929024 unmapped: 3219456 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:32.936711+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1337544 data_alloc: 218103808 data_used: 430080
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108929024 unmapped: 3219456 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f6478000/0x0/0x4ffc00000, data 0x29c84e7/0x2aa6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:33.936848+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108953600 unmapped: 3194880 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:34.937003+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108961792 unmapped: 3186688 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:35.937196+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 3178496 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:36.937344+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 3178496 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f647c000/0x0/0x4ffc00000, data 0x29c860e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.254523277s of 10.673042297s, submitted: 45
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 147 handle_osd_map epochs [148,148], i have 148, src has [1,148]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:37.937489+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341592 data_alloc: 218103808 data_used: 438272
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108961792 unmapped: 3186688 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:38.937643+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 109035520 unmapped: 3112960 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:39.937832+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 1843200 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f6433000/0x0/0x4ffc00000, data 0x2a10233/0x2aeb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:40.937983+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 1843200 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:41.938135+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 110698496 unmapped: 1449984 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:42.938279+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1354162 data_alloc: 218103808 data_used: 446464
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 1441792 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:43.938431+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 1277952 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:44.938599+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 1269760 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f63d8000/0x0/0x4ffc00000, data 0x2a691ea/0x2b46000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,1])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:45.938769+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 2555904 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:46.938883+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 112336896 unmapped: 1908736 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.776722908s of 10.055137634s, submitted: 245
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:47.939027+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1368320 data_alloc: 218103808 data_used: 454656
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 491520 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:48.939167+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 1540096 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:49.939313+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 1155072 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f630f000/0x0/0x4ffc00000, data 0x2b313b4/0x2c0f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:50.939498+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 802816 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:51.939634+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 1425408 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:52.939807+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1368654 data_alloc: 218103808 data_used: 454656
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 1425408 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:53.939959+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 1425408 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:54.940119+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 1343488 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:55.940292+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 1335296 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f62d4000/0x0/0x4ffc00000, data 0x2b6a1e8/0x2c49000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:56.940444+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 1335296 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:57.940664+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1374096 data_alloc: 218103808 data_used: 462848
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 1335296 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.353273392s of 10.966333389s, submitted: 54
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:58.940832+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 1269760 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:59.941065+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 1064960 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f62af000/0x0/0x4ffc00000, data 0x2b8fa42/0x2c6f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:00.941178+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 1048576 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f62af000/0x0/0x4ffc00000, data 0x2b8fa42/0x2c6f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:01.941264+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 1048576 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:02.941438+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386208 data_alloc: 218103808 data_used: 471040
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 2203648 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:03.941622+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 2203648 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:04.941810+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 114327552 unmapped: 2015232 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:05.942010+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 114532352 unmapped: 1810432 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f625f000/0x0/0x4ffc00000, data 0x2bdcffd/0x2cbf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:06.942162+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f625f000/0x0/0x4ffc00000, data 0x2bdcffd/0x2cbf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115572736 unmapped: 770048 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:07.942323+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386664 data_alloc: 218103808 data_used: 471040
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115572736 unmapped: 770048 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.617740631s of 10.144355774s, submitted: 76
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:08.942466+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115105792 unmapped: 1236992 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:09.942641+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115105792 unmapped: 1236992 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:10.942717+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 1228800 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:11.942851+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 1007616 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 155 heartbeat osd_stat(store_statfs(0x4f6214000/0x0/0x4ffc00000, data 0x2c24d78/0x2d09000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:12.942981+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1395922 data_alloc: 218103808 data_used: 479232
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115359744 unmapped: 983040 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:13.943176+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 1024000 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:14.943325+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115580928 unmapped: 2859008 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:15.943472+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 155 heartbeat osd_stat(store_statfs(0x4f61db000/0x0/0x4ffc00000, data 0x2c5e838/0x2d43000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115630080 unmapped: 2809856 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:16.943638+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115630080 unmapped: 2809856 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:17.943755+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401148 data_alloc: 218103808 data_used: 487424
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 2580480 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:18.943938+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115867648 unmapped: 2572288 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.538421631s of 10.944991112s, submitted: 85
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:19.944123+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f61c3000/0x0/0x4ffc00000, data 0x2c74d32/0x2d5b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115834880 unmapped: 2605056 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:20.944274+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115834880 unmapped: 2605056 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:21.944438+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 2596864 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:22.944591+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407156 data_alloc: 218103808 data_used: 499712
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 2596864 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:23.944717+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f6186000/0x0/0x4ffc00000, data 0x2cb06b7/0x2d97000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 2523136 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:24.944846+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 1474560 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:25.945004+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 1523712 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:26.945198+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 1269760 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f6146000/0x0/0x4ffc00000, data 0x2cef02d/0x2dd7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:27.945374+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414474 data_alloc: 218103808 data_used: 507904
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 1056768 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:28.945519+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117424128 unmapped: 2064384 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:29.945652+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.955653191s of 10.619499207s, submitted: 67
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 2834432 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:30.945800+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 2744320 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:31.946035+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 2686976 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:32.946167+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1419254 data_alloc: 218103808 data_used: 507904
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 2539520 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f60f0000/0x0/0x4ffc00000, data 0x2d469fb/0x2e2e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:33.946377+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 2539520 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:34.946539+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 2490368 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:35.946747+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 2539520 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:36.946980+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 2539520 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f60a4000/0x0/0x4ffc00000, data 0x2d90c10/0x2e7a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:37.947237+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426050 data_alloc: 218103808 data_used: 507904
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 2449408 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:38.947410+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 2572288 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:39.947579+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.613806725s of 10.206427574s, submitted: 34
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 2547712 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:40.947684+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f6079000/0x0/0x4ffc00000, data 0x2dbbd31/0x2ea4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,2])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 3588096 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:41.947820+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 3293184 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c67fc400
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:42.947953+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438690 data_alloc: 218103808 data_used: 520192
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117407744 unmapped: 3129344 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:43.948113+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117407744 unmapped: 3129344 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Got map version 15
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:44.948273+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f6028000/0x0/0x4ffc00000, data 0x2e099f6/0x2ef5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117555200 unmapped: 2981888 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:45.948433+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117571584 unmapped: 2965504 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:46.948573+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117579776 unmapped: 2957312 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:47.948724+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445772 data_alloc: 218103808 data_used: 524288
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 2834432 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:48.948921+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5ff8000/0x0/0x4ffc00000, data 0x2e39ff5/0x2f25000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 2793472 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:49.949085+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.518175602s of 10.031496048s, submitted: 85
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 2736128 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:50.949226+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 2736128 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:51.949455+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 2736128 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:52.949670+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445478 data_alloc: 218103808 data_used: 524288
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 2703360 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:53.949848+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 2703360 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5fda000/0x0/0x4ffc00000, data 0x2e58a03/0x2f44000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:54.950000+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5fda000/0x0/0x4ffc00000, data 0x2e58a03/0x2f44000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,4])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 2498560 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:55.950233+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 2498560 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:56.950604+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 2555904 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:57.950781+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1444040 data_alloc: 218103808 data_used: 524288
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 2555904 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:58.950920+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 2555904 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:59.951120+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 3448832 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:00.951266+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.308528900s of 10.589768410s, submitted: 13
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5fa6000/0x0/0x4ffc00000, data 0x2e8d0b2/0x2f78000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 3448832 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:01.951402+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 3448832 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:02.951560+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1447192 data_alloc: 218103808 data_used: 524288
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 3440640 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:03.951705+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118120448 unmapped: 3465216 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5f6d000/0x0/0x4ffc00000, data 0x2ec5c5a/0x2fb1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:04.951870+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118169600 unmapped: 3416064 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5f2e000/0x0/0x4ffc00000, data 0x2f051ab/0x2ff0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:05.952120+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118349824 unmapped: 3235840 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:06.952287+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119406592 unmapped: 2179072 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:07.952448+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452536 data_alloc: 218103808 data_used: 524288
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119603200 unmapped: 1982464 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:08.952677+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119668736 unmapped: 1916928 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:09.952887+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5efd000/0x0/0x4ffc00000, data 0x2f36505/0x3021000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118702080 unmapped: 2883584 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:10.953158+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.688433647s of 10.379757881s, submitted: 35
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118931456 unmapped: 2654208 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:11.953399+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 2400256 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:12.953570+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457500 data_alloc: 218103808 data_used: 524288
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 2301952 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5ec4000/0x0/0x4ffc00000, data 0x2f6eb7b/0x305a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:13.953722+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119300096 unmapped: 2285568 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:14.953881+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5e78000/0x0/0x4ffc00000, data 0x2fbaf75/0x30a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 3022848 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:15.954126+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 3022848 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:16.954271+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119627776 unmapped: 3006464 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:17.954438+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456464 data_alloc: 218103808 data_used: 524288
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 2826240 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:18.954656+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 120102912 unmapped: 2531328 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:19.954822+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 2048000 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:20.955005+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5e25000/0x0/0x4ffc00000, data 0x300dbd9/0x30f9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 1802240 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:21.955124+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.541758537s of 11.184603691s, submitted: 34
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 1777664 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:22.955251+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5e09000/0x0/0x4ffc00000, data 0x302a004/0x3115000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464780 data_alloc: 218103808 data_used: 524288
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119980032 unmapped: 2654208 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:23.955423+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 1474560 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:24.955593+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 1368064 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:25.955807+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 1368064 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:26.955985+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5dd2000/0x0/0x4ffc00000, data 0x30608bd/0x314c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121380864 unmapped: 1253376 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:27.956115+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465692 data_alloc: 218103808 data_used: 524288
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121397248 unmapped: 1236992 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:28.956281+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5dd2000/0x0/0x4ffc00000, data 0x30608bd/0x314c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121397248 unmapped: 1236992 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:29.956486+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121503744 unmapped: 2179072 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:30.956680+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121503744 unmapped: 2179072 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:31.956920+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5d9c000/0x0/0x4ffc00000, data 0x3095bce/0x3182000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.672442436s of 10.087901115s, submitted: 29
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121626624 unmapped: 2056192 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:32.957080+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471104 data_alloc: 218103808 data_used: 524288
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 1802240 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:33.957209+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 2367488 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:34.957347+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 2367488 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:35.957502+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5d47000/0x0/0x4ffc00000, data 0x30eb6ef/0x31d7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121528320 unmapped: 2154496 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:36.957631+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 3465216 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:37.957785+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5d29000/0x0/0x4ffc00000, data 0x3108d56/0x31f5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472848 data_alloc: 218103808 data_used: 524288
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 3448832 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:38.957965+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 3383296 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:39.958151+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 120111104 unmapped: 3571712 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:40.958315+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5d17000/0x0/0x4ffc00000, data 0x311af7f/0x3207000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 3538944 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:41.958504+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5ce0000/0x0/0x4ffc00000, data 0x315238b/0x323e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.636823654s of 10.060282707s, submitted: 40
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 3235840 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:42.958715+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473798 data_alloc: 218103808 data_used: 524288
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 3252224 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:43.958871+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5c9c000/0x0/0x4ffc00000, data 0x31974b9/0x3282000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 3235840 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:44.959117+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121184256 unmapped: 3547136 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:45.959397+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121184256 unmapped: 3547136 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:46.960412+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f5c8a000/0x0/0x4ffc00000, data 0x31a762d/0x3293000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 3465216 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:47.960607+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1486688 data_alloc: 218103808 data_used: 532480
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121397248 unmapped: 3334144 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:48.960784+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121397248 unmapped: 3334144 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:49.960955+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121397248 unmapped: 3334144 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:50.961191+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121503744 unmapped: 3227648 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:51.961355+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.885109901s of 10.018849373s, submitted: 67
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 3219456 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f5c22000/0x0/0x4ffc00000, data 0x320d09b/0x32fb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:52.961566+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1493854 data_alloc: 218103808 data_used: 540672
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 3211264 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:53.961727+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121536512 unmapped: 3194880 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:54.961950+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121536512 unmapped: 3194880 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:55.962134+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121667584 unmapped: 4112384 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:56.962388+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f5bf8000/0x0/0x4ffc00000, data 0x3237b0b/0x3326000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121757696 unmapped: 4022272 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:57.962553+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497076 data_alloc: 218103808 data_used: 540672
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121757696 unmapped: 4022272 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:58.962814+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121757696 unmapped: 4022272 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:59.963975+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123043840 unmapped: 2736128 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:00.964113+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123052032 unmapped: 2727936 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:01.964431+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f5b95000/0x0/0x4ffc00000, data 0x3299583/0x3388000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123052032 unmapped: 2727936 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:02.964760+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3577970147' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 01 17:11:30 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1929943715' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 01 17:11:30 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3025202361' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 01 17:11:30 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2416444125' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 01 17:11:30 compute-0 ceph-mon[74273]: pgmap v1283: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:30 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1974448892' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1505500 data_alloc: 218103808 data_used: 548864
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123248640 unmapped: 2531328 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:03.965008+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123248640 unmapped: 2531328 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:04.965172+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.575702667s of 12.447751045s, submitted: 99
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f5b4c000/0x0/0x4ffc00000, data 0x32e3e80/0x33d2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122421248 unmapped: 3358720 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:05.965630+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 4292608 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:06.965827+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 4292608 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:07.966067+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f5b48000/0x0/0x4ffc00000, data 0x32e5970/0x33d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1508010 data_alloc: 218103808 data_used: 557056
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 4284416 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:08.966408+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122560512 unmapped: 4268032 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:09.966714+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122429440 unmapped: 4399104 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:10.966926+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 4325376 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:11.967234+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 4325376 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:12.967519+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f5b1f000/0x0/0x4ffc00000, data 0x330f6bc/0x33ff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510426 data_alloc: 218103808 data_used: 557056
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 4325376 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:13.967750+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 4325376 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:14.967989+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.771459103s of 10.038169861s, submitted: 34
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 4317184 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:15.968206+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 4325376 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:16.968413+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122601472 unmapped: 4227072 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:17.968554+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515650 data_alloc: 218103808 data_used: 557056
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122814464 unmapped: 4014080 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:18.968748+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f5ad6000/0x0/0x4ffc00000, data 0x3358e22/0x3448000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 4005888 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:19.968953+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 4161536 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:20.969150+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122740736 unmapped: 4087808 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:21.969330+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122986496 unmapped: 3842048 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:22.969516+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522438 data_alloc: 218103808 data_used: 557056
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122986496 unmapped: 3842048 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:23.969702+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f5a87000/0x0/0x4ffc00000, data 0x33a506b/0x3497000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 3825664 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:24.969965+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 0.578993261s of 10.013916969s, submitted: 24
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123068416 unmapped: 3760128 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:25.970172+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 3702784 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:26.970318+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 3448832 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:27.970517+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1525874 data_alloc: 218103808 data_used: 561152
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 3448832 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:28.970659+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f5a31000/0x0/0x4ffc00000, data 0x33fac86/0x34ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 3448832 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:29.977839+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f5a1f000/0x0/0x4ffc00000, data 0x340cbeb/0x34ff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1,2])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 2301952 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:30.977989+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 3350528 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:31.978190+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124600320 unmapped: 3276800 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:32.978399+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f59eb000/0x0/0x4ffc00000, data 0x343fb27/0x3533000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1534542 data_alloc: 218103808 data_used: 561152
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:33.978585+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 3203072 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:34.978723+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 3203072 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 1.888821959s of 10.038423538s, submitted: 34
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:35.978961+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124829696 unmapped: 3047424 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 164 handle_osd_map epochs [165,165], i have 165, src has [1,165]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f5986000/0x0/0x4ffc00000, data 0x34a3485/0x3596000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:36.979152+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 125288448 unmapped: 2588672 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f5986000/0x0/0x4ffc00000, data 0x34a3485/0x3596000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:37.979415+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 125526016 unmapped: 2351104 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541706 data_alloc: 218103808 data_used: 569344
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:38.979588+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 125534208 unmapped: 2342912 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:39.979932+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124542976 unmapped: 3334144 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:40.980139+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124542976 unmapped: 3334144 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f5944000/0x0/0x4ffc00000, data 0x34e784c/0x35da000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:41.980455+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124542976 unmapped: 3334144 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:42.980697+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f592d000/0x0/0x4ffc00000, data 0x34fc260/0x35f0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 3973120 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1545516 data_alloc: 218103808 data_used: 577536
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:43.980954+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 3866624 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:44.981132+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124166144 unmapped: 3710976 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.039592266s of 10.316161156s, submitted: 69
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:45.981367+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 3350528 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:46.981559+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 3350528 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f58c2000/0x0/0x4ffc00000, data 0x3567939/0x365c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,2])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:47.981753+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 3350528 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1556924 data_alloc: 218103808 data_used: 581632
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:48.981953+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 3301376 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:49.982118+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 125706240 unmapped: 2170880 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:50.982276+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 125526016 unmapped: 2351104 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:51.982429+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 125558784 unmapped: 2318336 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:52.982558+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f5875000/0x0/0x4ffc00000, data 0x35b2f55/0x36a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,1])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 125648896 unmapped: 2228224 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1554298 data_alloc: 218103808 data_used: 581632
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:53.982692+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 125648896 unmapped: 2228224 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:54.982830+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 125886464 unmapped: 1990656 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.302689075s of 10.060779572s, submitted: 29
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:55.983006+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126935040 unmapped: 1990656 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:56.983128+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126935040 unmapped: 1990656 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f583b000/0x0/0x4ffc00000, data 0x35ec827/0x36e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:57.983269+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126943232 unmapped: 1982464 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558724 data_alloc: 218103808 data_used: 589824
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:58.983416+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126943232 unmapped: 1982464 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:59.983540+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f583c000/0x0/0x4ffc00000, data 0x35ec88c/0x36e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126943232 unmapped: 1982464 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:00.983663+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126943232 unmapped: 1982464 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 167 handle_osd_map epochs [167,168], i have 167, src has [1,168]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:01.983769+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126943232 unmapped: 1982464 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:02.983888+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126943232 unmapped: 1982464 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1562016 data_alloc: 218103808 data_used: 598016
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:03.984085+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126943232 unmapped: 1982464 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:04.984214+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126959616 unmapped: 1966080 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 168 heartbeat osd_stat(store_statfs(0x4f5839000/0x0/0x4ffc00000, data 0x35ee31e/0x36e4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:05.984364+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126959616 unmapped: 1966080 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.955657005s of 11.050517082s, submitted: 60
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:06.984504+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 168 heartbeat osd_stat(store_statfs(0x4f5839000/0x0/0x4ffc00000, data 0x35ee31e/0x36e4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126976000 unmapped: 1949696 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:07.984688+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126976000 unmapped: 1949696 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1562904 data_alloc: 218103808 data_used: 598016
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:08.984811+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126976000 unmapped: 1949696 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:09.984959+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126976000 unmapped: 1949696 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:10.985081+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126976000 unmapped: 1949696 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 168 heartbeat osd_stat(store_statfs(0x4f5839000/0x0/0x4ffc00000, data 0x35ee483/0x36e5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:11.985221+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126976000 unmapped: 1949696 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:12.985412+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126976000 unmapped: 1949696 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Got map version 16
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1563982 data_alloc: 218103808 data_used: 598016
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:13.985551+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126992384 unmapped: 1933312 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:14.985663+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126992384 unmapped: 1933312 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 168 heartbeat osd_stat(store_statfs(0x4f5838000/0x0/0x4ffc00000, data 0x35ee587/0x36e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:15.985816+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127008768 unmapped: 1916928 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.202703476s of 10.113478661s, submitted: 41
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:16.985986+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:17.986101+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f5832000/0x0/0x4ffc00000, data 0x35f1d76/0x36ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571080 data_alloc: 218103808 data_used: 602112
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:18.986263+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:19.986419+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:20.986584+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 170 handle_osd_map epochs [170,171], i have 170, src has [1,171]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:21.986692+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:22.986872+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f5831000/0x0/0x4ffc00000, data 0x35f38f2/0x36ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572596 data_alloc: 218103808 data_used: 610304
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:23.987083+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:24.987239+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:25.987443+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:26.987637+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:27.987844+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572596 data_alloc: 218103808 data_used: 610304
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:28.988184+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.914357185s of 12.439829826s, submitted: 50
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f5831000/0x0/0x4ffc00000, data 0x35f38f2/0x36ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:29.988378+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:30.988550+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:31.988739+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f5832000/0x0/0x4ffc00000, data 0x35f38f2/0x36ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:32.988875+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f5832000/0x0/0x4ffc00000, data 0x35f38f2/0x36ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571892 data_alloc: 218103808 data_used: 610304
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:33.989088+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:34.989310+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:35.989597+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f5832000/0x0/0x4ffc00000, data 0x35f38f2/0x36ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:36.989780+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f5832000/0x0/0x4ffc00000, data 0x35f38f2/0x36ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:37.989956+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1573484 data_alloc: 218103808 data_used: 610304
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:38.990127+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f5831000/0x0/0x4ffc00000, data 0x35f398d/0x36ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.773765564s of 10.584938049s, submitted: 4
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:39.990318+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127057920 unmapped: 1867776 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:40.990484+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127057920 unmapped: 1867776 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 171 ms_handle_reset con 0x5624c67fc400 session 0x5624c96994a0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:41.990612+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:42.990778+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Got map version 17
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:43.991009+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572810 data_alloc: 218103808 data_used: 610304
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f5831000/0x0/0x4ffc00000, data 0x35f3abc/0x36ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:44.991221+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:45.991415+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:46.991554+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:47.991718+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f5832000/0x0/0x4ffc00000, data 0x35f3a86/0x36ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 171 handle_osd_map epochs [172,172], i have 172, src has [1,172]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 171 handle_osd_map epochs [172,172], i have 172, src has [1,172]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:48.991862+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1577160 data_alloc: 218103808 data_used: 618496
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:49.992060+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:50.992285+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:51.992443+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 172 heartbeat osd_stat(store_statfs(0x4f582e000/0x0/0x4ffc00000, data 0x35f566c/0x36ef000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:52.992637+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:53.992885+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576792 data_alloc: 218103808 data_used: 618496
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:54.993102+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:55.993304+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 172 heartbeat osd_stat(store_statfs(0x4f582e000/0x0/0x4ffc00000, data 0x35f566c/0x36ef000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:56.993467+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:57.993633+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.244745255s of 18.642427444s, submitted: 223
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:58.993803+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579766 data_alloc: 218103808 data_used: 618496
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:59.993971+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:00.994161+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:01.994300+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:02.994444+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:03.994650+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579766 data_alloc: 218103808 data_used: 618496
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:04.994805+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:05.995039+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:06.995227+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:07.995413+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:08.995591+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579766 data_alloc: 218103808 data_used: 618496
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:09.995764+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:10.996648+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:11.996878+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:12.997076+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:13.997228+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579766 data_alloc: 218103808 data_used: 618496
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:14.997380+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:15.997558+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:16.997725+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:17.997929+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:18.998085+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579766 data_alloc: 218103808 data_used: 618496
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:19.998245+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:20.998469+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:21.998628+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:22.998793+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:23.999009+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579766 data_alloc: 218103808 data_used: 618496
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:24.999133+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:25.999314+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:26.999518+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:27.999639+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:28.999714+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579766 data_alloc: 218103808 data_used: 618496
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:29.999948+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:31.000071+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:32.000207+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 2514944 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:33.000333+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 2514944 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:34.000451+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579766 data_alloc: 218103808 data_used: 618496
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 2514944 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:35.000615+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 2514944 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:36.000804+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 2514944 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:37.000972+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 2514944 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:38.001149+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 2514944 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:39.001322+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579766 data_alloc: 218103808 data_used: 618496
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 2514944 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:40.001468+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 2514944 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:41.001631+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.923511505s of 42.943687439s, submitted: 15
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 ms_handle_reset con 0x5624c906f800 session 0x5624c865c000
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 1998848 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:42.001823+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 1990656 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:43.001973+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 1990656 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Got map version 18
Oct 01 17:11:30 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:44.002136+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:45.002317+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:46.002531+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:47.002672+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:48.002834+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:49.002966+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:50.003504+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:51.003624+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:52.003767+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:53.004009+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:54.004152+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:55.004264+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:56.005968+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:57.006095+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 2555904 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: do_command 'config diff' '{prefix=config diff}'
Oct 01 17:11:30 compute-0 ceph-osd[89167]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 01 17:11:30 compute-0 ceph-osd[89167]: do_command 'config show' '{prefix=config show}'
Oct 01 17:11:30 compute-0 ceph-osd[89167]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 01 17:11:30 compute-0 ceph-osd[89167]: do_command 'counter dump' '{prefix=counter dump}'
Oct 01 17:11:30 compute-0 ceph-osd[89167]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:58.006236+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: do_command 'counter schema' '{prefix=counter schema}'
Oct 01 17:11:30 compute-0 ceph-osd[89167]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:59.006435+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:30 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:30 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:11:30 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:11:30 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128630784 unmapped: 2392064 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:11:30 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:00.006589+0000)
Oct 01 17:11:30 compute-0 ceph-osd[89167]: do_command 'log dump' '{prefix=log dump}'
Oct 01 17:11:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 01 17:11:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/741208154' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 01 17:11:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 01 17:11:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3689349067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 01 17:11:30 compute-0 rsyslogd[1001]: imjournal from <np0005464933:ceph-osd>: begin to drop messages due to rate-limiting
Oct 01 17:11:30 compute-0 nova_compute[259504]: 2025-10-01 17:11:30.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:11:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 01 17:11:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2678468904' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 01 17:11:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 01 17:11:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/12043240' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 01 17:11:31 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14625 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 01 17:11:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/314667791' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 01 17:11:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/741208154' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 01 17:11:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3689349067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 01 17:11:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2678468904' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 01 17:11:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/12043240' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 01 17:11:31 compute-0 ceph-mon[74273]: from='client.14625 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/314667791' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 01 17:11:31 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14629 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:31 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14631 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:31 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14633 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:11:32 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14635 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:32 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:32 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14637 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:32 compute-0 ceph-mon[74273]: from='client.14629 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:32 compute-0 ceph-mon[74273]: from='client.14631 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:32 compute-0 ceph-mon[74273]: from='client.14633 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:32 compute-0 ceph-mon[74273]: from='client.14635 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:32 compute-0 ceph-mon[74273]: pgmap v1284: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:32 compute-0 ceph-mon[74273]: from='client.14637 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:32 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14641 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 01 17:11:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3070090672' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 01 17:11:33 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14645 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 01 17:11:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1882010050' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 01 17:11:33 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14649 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:33 compute-0 ceph-mon[74273]: from='client.14641 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3070090672' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 01 17:11:33 compute-0 ceph-mon[74273]: from='client.14645 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1882010050' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 01 17:11:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 01 17:11:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/941689180' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 01 17:11:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 01 17:11:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1894268338' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 01 17:11:34 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 01 17:11:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 01 17:11:34 compute-0 ceph-mon[74273]: from='client.14649 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/941689180' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 01 17:11:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1894268338' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 01 17:11:34 compute-0 ceph-mon[74273]: pgmap v1285: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:34 compute-0 ceph-mon[74273]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 01 17:11:34 compute-0 ceph-mon[74273]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 01 17:11:34 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 01 17:11:34 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/596620069' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 1277952 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 815342 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:03.241165+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 1277952 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:04.241292+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 1277952 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:05.241426+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 1269760 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:06.241537+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 1269760 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:07.241684+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 1261568 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 815342 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:08.241817+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 1261568 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:09.241998+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 1261568 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:10.242144+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 1253376 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:11.242448+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 1245184 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:12.242633+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 1236992 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 815342 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:13.242828+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 1236992 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:14.243023+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 1236992 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:15.243159+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 1228800 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:16.243351+0000)
Oct 01 17:11:34 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 1228800 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:17.243467+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.080343246s of 19.097831726s, submitted: 4
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 1220608 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 816489 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:18.243613+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:39:47.314638+0000 osd.0 (osd.0) 120 : cluster [DBG] 3.a scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:39:47.328734+0000 osd.0 (osd.0) 121 : cluster [DBG] 3.a scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 121) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:39:47.314638+0000 osd.0 (osd.0) 120 : cluster [DBG] 3.a scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:39:47.328734+0000 osd.0 (osd.0) 121 : cluster [DBG] 3.a scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 1196032 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:19.243826+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:39:48.346455+0000 osd.0 (osd.0) 122 : cluster [DBG] 7.4 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:39:48.360546+0000 osd.0 (osd.0) 123 : cluster [DBG] 7.4 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 123) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:39:48.346455+0000 osd.0 (osd.0) 122 : cluster [DBG] 7.4 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:39:48.360546+0000 osd.0 (osd.0) 123 : cluster [DBG] 7.4 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 1196032 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:20.243996+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.f scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.f scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 1187840 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:21.244788+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:39:50.333695+0000 osd.0 (osd.0) 124 : cluster [DBG] 8.f scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:39:50.354881+0000 osd.0 (osd.0) 125 : cluster [DBG] 8.f scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 125) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:39:50.333695+0000 osd.0 (osd.0) 124 : cluster [DBG] 8.f scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:39:50.354881+0000 osd.0 (osd.0) 125 : cluster [DBG] 8.f scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 1171456 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:22.244990+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 1171456 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 818783 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:23.245618+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 1163264 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:24.245796+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 1155072 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:25.246213+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:39:54.292244+0000 osd.0 (osd.0) 126 : cluster [DBG] 7.f scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:39:54.306373+0000 osd.0 (osd.0) 127 : cluster [DBG] 7.f scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 127) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:39:54.292244+0000 osd.0 (osd.0) 126 : cluster [DBG] 7.f scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:39:54.306373+0000 osd.0 (osd.0) 127 : cluster [DBG] 7.f scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 1146880 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:26.246520+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 1138688 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:27.246828+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.b deep-scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.971094131s of 10.003036499s, submitted: 8
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.b deep-scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 1130496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 821077 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:28.246995+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:39:57.317611+0000 osd.0 (osd.0) 128 : cluster [DBG] 8.b deep-scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:39:57.331679+0000 osd.0 (osd.0) 129 : cluster [DBG] 8.b deep-scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 129) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:39:57.317611+0000 osd.0 (osd.0) 128 : cluster [DBG] 8.b deep-scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:39:57.331679+0000 osd.0 (osd.0) 129 : cluster [DBG] 8.b deep-scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 1130496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:29.247253+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:39:58.304491+0000 osd.0 (osd.0) 130 : cluster [DBG] 3.9 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:39:58.318527+0000 osd.0 (osd.0) 131 : cluster [DBG] 3.9 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 131) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:39:58.304491+0000 osd.0 (osd.0) 130 : cluster [DBG] 3.9 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:39:58.318527+0000 osd.0 (osd.0) 131 : cluster [DBG] 3.9 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 1130496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:30.247609+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 1122304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:31.247827+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 4 last_log 135 sent 131 num 4 unsent 4 sending 4
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:00.259093+0000 osd.0 (osd.0) 132 : cluster [DBG] 11.1 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:00.273114+0000 osd.0 (osd.0) 133 : cluster [DBG] 11.1 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:01.222934+0000 osd.0 (osd.0) 134 : cluster [DBG] 11.4 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:01.237249+0000 osd.0 (osd.0) 135 : cluster [DBG] 11.4 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 135) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:00.259093+0000 osd.0 (osd.0) 132 : cluster [DBG] 11.1 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:00.273114+0000 osd.0 (osd.0) 133 : cluster [DBG] 11.1 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:01.222934+0000 osd.0 (osd.0) 134 : cluster [DBG] 11.4 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:01.237249+0000 osd.0 (osd.0) 135 : cluster [DBG] 11.4 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 1122304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:32.248470+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 1114112 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824520 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:33.248984+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 1097728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:34.249202+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 1097728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:35.249380+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.c scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.c scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:36.249528+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:05.311228+0000 osd.0 (osd.0) 136 : cluster [DBG] 3.c scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:05.325332+0000 osd.0 (osd.0) 137 : cluster [DBG] 3.c scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 137) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:05.311228+0000 osd.0 (osd.0) 136 : cluster [DBG] 3.c scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:05.325332+0000 osd.0 (osd.0) 137 : cluster [DBG] 3.c scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 1073152 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:37.249783+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:06.291581+0000 osd.0 (osd.0) 138 : cluster [DBG] 7.9 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:06.305710+0000 osd.0 (osd.0) 139 : cluster [DBG] 7.9 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 139) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:06.291581+0000 osd.0 (osd.0) 138 : cluster [DBG] 7.9 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:06.305710+0000 osd.0 (osd.0) 139 : cluster [DBG] 7.9 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 827962 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:38.249984+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:07.272337+0000 osd.0 (osd.0) 140 : cluster [DBG] 11.6 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:07.286479+0000 osd.0 (osd.0) 141 : cluster [DBG] 11.6 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.922298431s of 10.973222733s, submitted: 14
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 141) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:07.272337+0000 osd.0 (osd.0) 140 : cluster [DBG] 11.6 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:07.286479+0000 osd.0 (osd.0) 141 : cluster [DBG] 11.6 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:39.250178+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:08.291049+0000 osd.0 (osd.0) 142 : cluster [DBG] 8.6 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:08.308621+0000 osd.0 (osd.0) 143 : cluster [DBG] 8.6 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 143) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:08.291049+0000 osd.0 (osd.0) 142 : cluster [DBG] 8.6 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:08.308621+0000 osd.0 (osd.0) 143 : cluster [DBG] 8.6 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:40.250393+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:41.250581+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:10.341048+0000 osd.0 (osd.0) 144 : cluster [DBG] 3.f scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:10.355175+0000 osd.0 (osd.0) 145 : cluster [DBG] 3.f scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 145) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:10.341048+0000 osd.0 (osd.0) 144 : cluster [DBG] 3.f scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:10.355175+0000 osd.0 (osd.0) 145 : cluster [DBG] 3.f scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:42.250765+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:43.250942+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 830256 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:44.251118+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:13.319390+0000 osd.0 (osd.0) 146 : cluster [DBG] 8.1a scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:13.333497+0000 osd.0 (osd.0) 147 : cluster [DBG] 8.1a scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 147) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:13.319390+0000 osd.0 (osd.0) 146 : cluster [DBG] 8.1a scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:13.333497+0000 osd.0 (osd.0) 147 : cluster [DBG] 8.1a scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:45.251359+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:14.326150+0000 osd.0 (osd.0) 148 : cluster [DBG] 3.12 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:14.340197+0000 osd.0 (osd.0) 149 : cluster [DBG] 3.12 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 149) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:14.326150+0000 osd.0 (osd.0) 148 : cluster [DBG] 3.12 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:14.340197+0000 osd.0 (osd.0) 149 : cluster [DBG] 3.12 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:46.251561+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:47.251704+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:16.301683+0000 osd.0 (osd.0) 150 : cluster [DBG] 11.19 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:16.315849+0000 osd.0 (osd.0) 151 : cluster [DBG] 11.19 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 151) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:16.301683+0000 osd.0 (osd.0) 150 : cluster [DBG] 11.19 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:16.315849+0000 osd.0 (osd.0) 151 : cluster [DBG] 11.19 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:48.251861+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 833701 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.065608978s of 10.104915619s, submitted: 10
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:49.252014+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:18.395848+0000 osd.0 (osd.0) 152 : cluster [DBG] 8.1f scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:18.410102+0000 osd.0 (osd.0) 153 : cluster [DBG] 8.1f scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 153) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:18.395848+0000 osd.0 (osd.0) 152 : cluster [DBG] 8.1f scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:18.410102+0000 osd.0 (osd.0) 153 : cluster [DBG] 8.1f scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:50.252208+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:19.364886+0000 osd.0 (osd.0) 154 : cluster [DBG] 3.15 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:19.378970+0000 osd.0 (osd.0) 155 : cluster [DBG] 3.15 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 155) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:19.364886+0000 osd.0 (osd.0) 154 : cluster [DBG] 3.15 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:19.378970+0000 osd.0 (osd.0) 155 : cluster [DBG] 3.15 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:51.252419+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:52.253539+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:21.340974+0000 osd.0 (osd.0) 156 : cluster [DBG] 8.18 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:21.355014+0000 osd.0 (osd.0) 157 : cluster [DBG] 8.18 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 157) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:21.340974+0000 osd.0 (osd.0) 156 : cluster [DBG] 8.18 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:21.355014+0000 osd.0 (osd.0) 157 : cluster [DBG] 8.18 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:53.254758+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837145 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:54.255187+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:55.255623+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:24.319744+0000 osd.0 (osd.0) 158 : cluster [DBG] 8.1d scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:24.333928+0000 osd.0 (osd.0) 159 : cluster [DBG] 8.1d scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 159) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:24.319744+0000 osd.0 (osd.0) 158 : cluster [DBG] 8.1d scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:24.333928+0000 osd.0 (osd.0) 159 : cluster [DBG] 8.1d scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:56.256108+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:57.256491+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:58.256720+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838293 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:39:59.256952+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.933362961s of 10.961439133s, submitted: 8
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:00.257204+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:29.357312+0000 osd.0 (osd.0) 160 : cluster [DBG] 7.13 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:29.371381+0000 osd.0 (osd.0) 161 : cluster [DBG] 7.13 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 161) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:29.357312+0000 osd.0 (osd.0) 160 : cluster [DBG] 7.13 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:29.371381+0000 osd.0 (osd.0) 161 : cluster [DBG] 7.13 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:01.257463+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:02.257587+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:31.408194+0000 osd.0 (osd.0) 162 : cluster [DBG] 3.17 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:31.422338+0000 osd.0 (osd.0) 163 : cluster [DBG] 3.17 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 163) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:31.408194+0000 osd.0 (osd.0) 162 : cluster [DBG] 3.17 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:31.422338+0000 osd.0 (osd.0) 163 : cluster [DBG] 3.17 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:03.257753+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 925696 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840589 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:04.257946+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:33.500205+0000 osd.0 (osd.0) 164 : cluster [DBG] 8.9 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:33.514317+0000 osd.0 (osd.0) 165 : cluster [DBG] 8.9 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 165) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:33.500205+0000 osd.0 (osd.0) 164 : cluster [DBG] 8.9 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:33.514317+0000 osd.0 (osd.0) 165 : cluster [DBG] 8.9 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:05.258182+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:34.469691+0000 osd.0 (osd.0) 166 : cluster [DBG] 7.6 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:34.483932+0000 osd.0 (osd.0) 167 : cluster [DBG] 7.6 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 167) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:34.469691+0000 osd.0 (osd.0) 166 : cluster [DBG] 7.6 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:34.483932+0000 osd.0 (osd.0) 167 : cluster [DBG] 7.6 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:06.258430+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:07.258660+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:08.258835+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:37.444577+0000 osd.0 (osd.0) 168 : cluster [DBG] 9.3 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:37.486954+0000 osd.0 (osd.0) 169 : cluster [DBG] 9.3 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 844030 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 169) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:37.444577+0000 osd.0 (osd.0) 168 : cluster [DBG] 9.3 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:37.486954+0000 osd.0 (osd.0) 169 : cluster [DBG] 9.3 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:09.259120+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.065844536s of 10.100466728s, submitted: 10
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:10.259249+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:39.457868+0000 osd.0 (osd.0) 170 : cluster [DBG] 9.1b scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:39.479037+0000 osd.0 (osd.0) 171 : cluster [DBG] 9.1b scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 171) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:39.457868+0000 osd.0 (osd.0) 170 : cluster [DBG] 9.1b scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:39.479037+0000 osd.0 (osd.0) 171 : cluster [DBG] 9.1b scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:11.259435+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:12.259629+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:13.259769+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:42.482297+0000 osd.0 (osd.0) 172 : cluster [DBG] 9.1 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:42.521160+0000 osd.0 (osd.0) 173 : cluster [DBG] 9.1 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 846325 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.d scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.d scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 173) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:42.482297+0000 osd.0 (osd.0) 172 : cluster [DBG] 9.1 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:42.521160+0000 osd.0 (osd.0) 173 : cluster [DBG] 9.1 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:14.259971+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:43.498685+0000 osd.0 (osd.0) 174 : cluster [DBG] 9.d scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:43.541042+0000 osd.0 (osd.0) 175 : cluster [DBG] 9.d scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 175) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:43.498685+0000 osd.0 (osd.0) 174 : cluster [DBG] 9.d scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:43.541042+0000 osd.0 (osd.0) 175 : cluster [DBG] 9.d scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:15.260136+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:44.506720+0000 osd.0 (osd.0) 176 : cluster [DBG] 9.9 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:44.538503+0000 osd.0 (osd.0) 177 : cluster [DBG] 9.9 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 177) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:44.506720+0000 osd.0 (osd.0) 176 : cluster [DBG] 9.9 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:44.538503+0000 osd.0 (osd.0) 177 : cluster [DBG] 9.9 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:16.260308+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.b deep-scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.b deep-scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:17.260432+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:46.495121+0000 osd.0 (osd.0) 178 : cluster [DBG] 9.b deep-scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:46.523320+0000 osd.0 (osd.0) 179 : cluster [DBG] 9.b deep-scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 179) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:46.495121+0000 osd.0 (osd.0) 178 : cluster [DBG] 9.b deep-scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:46.523320+0000 osd.0 (osd.0) 179 : cluster [DBG] 9.b deep-scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:18.260605+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849766 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:19.260760+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:48.522716+0000 osd.0 (osd.0) 180 : cluster [DBG] 6.7 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:48.540402+0000 osd.0 (osd.0) 181 : cluster [DBG] 6.7 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68321280 unmapped: 827392 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 181) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:48.522716+0000 osd.0 (osd.0) 180 : cluster [DBG] 6.7 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:48.540402+0000 osd.0 (osd.0) 181 : cluster [DBG] 6.7 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:20.260973+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68321280 unmapped: 827392 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:21.261154+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 819200 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:22.261274+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 819200 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:23.261445+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 819200 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 850913 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:24.261763+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 811008 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:25.261905+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 811008 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:26.262061+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 802816 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.044855118s of 17.112829208s, submitted: 12
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:27.262214+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:56.570731+0000 osd.0 (osd.0) 182 : cluster [DBG] 6.3 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:56.592106+0000 osd.0 (osd.0) 183 : cluster [DBG] 6.3 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 802816 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 183) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:56.570731+0000 osd.0 (osd.0) 182 : cluster [DBG] 6.3 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:56.592106+0000 osd.0 (osd.0) 183 : cluster [DBG] 6.3 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:28.262420+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 802816 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852060 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:29.262587+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:58.546571+0000 osd.0 (osd.0) 184 : cluster [DBG] 9.1d scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:40:58.578349+0000 osd.0 (osd.0) 185 : cluster [DBG] 9.1d scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 786432 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 185) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:58.546571+0000 osd.0 (osd.0) 184 : cluster [DBG] 9.1d scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:40:58.578349+0000 osd.0 (osd.0) 185 : cluster [DBG] 9.1d scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:30.262803+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 786432 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:31.262951+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 786432 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:32.263094+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:41:01.647176+0000 osd.0 (osd.0) 186 : cluster [DBG] 9.5 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:41:01.685984+0000 osd.0 (osd.0) 187 : cluster [DBG] 9.5 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 778240 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 187) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:41:01.647176+0000 osd.0 (osd.0) 186 : cluster [DBG] 9.5 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:41:01.685984+0000 osd.0 (osd.0) 187 : cluster [DBG] 9.5 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:33.263433+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 778240 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 854355 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:34.263527+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68378624 unmapped: 770048 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:35.263631+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:41:04.606357+0000 osd.0 (osd.0) 188 : cluster [DBG] 9.11 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:41:04.638078+0000 osd.0 (osd.0) 189 : cluster [DBG] 9.11 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68378624 unmapped: 770048 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 189) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:41:04.606357+0000 osd.0 (osd.0) 188 : cluster [DBG] 9.11 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:41:04.638078+0000 osd.0 (osd.0) 189 : cluster [DBG] 9.11 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:36.263825+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68386816 unmapped: 761856 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:37.263962+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68386816 unmapped: 761856 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.991504669s of 11.019228935s, submitted: 8
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:38.264110+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:41:07.589934+0000 osd.0 (osd.0) 190 : cluster [DBG] 6.5 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:41:07.611062+0000 osd.0 (osd.0) 191 : cluster [DBG] 6.5 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 753664 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 856650 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 191) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:41:07.589934+0000 osd.0 (osd.0) 190 : cluster [DBG] 6.5 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:41:07.611062+0000 osd.0 (osd.0) 191 : cluster [DBG] 6.5 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:39.264339+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68403200 unmapped: 745472 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:40.264472+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:41:09.582640+0000 osd.0 (osd.0) 192 : cluster [DBG] 6.9 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:41:09.596719+0000 osd.0 (osd.0) 193 : cluster [DBG] 6.9 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68403200 unmapped: 745472 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 193) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:41:09.582640+0000 osd.0 (osd.0) 192 : cluster [DBG] 6.9 scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:41:09.596719+0000 osd.0 (osd.0) 193 : cluster [DBG] 6.9 scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:41.264644+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:41:10.625950+0000 osd.0 (osd.0) 194 : cluster [DBG] 6.a scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:41:10.639983+0000 osd.0 (osd.0) 195 : cluster [DBG] 6.a scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68419584 unmapped: 729088 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 195) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:41:10.625950+0000 osd.0 (osd.0) 194 : cluster [DBG] 6.a scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:41:10.639983+0000 osd.0 (osd.0) 195 : cluster [DBG] 6.a scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:42.264853+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68419584 unmapped: 729088 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.16 deep-scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.16 deep-scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:43.264989+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:41:12.614058+0000 osd.0 (osd.0) 196 : cluster [DBG] 9.16 deep-scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:41:12.642493+0000 osd.0 (osd.0) 197 : cluster [DBG] 9.16 deep-scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68419584 unmapped: 729088 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 860092 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 197) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:41:12.614058+0000 osd.0 (osd.0) 196 : cluster [DBG] 9.16 deep-scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:41:12.642493+0000 osd.0 (osd.0) 197 : cluster [DBG] 9.16 deep-scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:44.265179+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:45.265312+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:41:14.607834+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.1c scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:41:14.646674+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.1c scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 199) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:41:14.607834+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.1c scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:41:14.646674+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.1c scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:46.265489+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:47.265680+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 712704 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.960006714s of 10.011592865s, submitted: 10
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:48.265884+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:41:17.601488+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.1e scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  will send 2025-10-01T16:41:17.633298+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.1e scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client handle_log_ack log(last 201) v1
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:41:17.601488+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.1e scrub starts
Oct 01 17:11:34 compute-0 ceph-osd[88140]: log_client  logged 2025-10-01T16:41:17.633298+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.1e scrub ok
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 712704 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:49.266096+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 704512 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:50.266245+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 704512 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:51.266434+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:52.266585+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:53.266744+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:54.266885+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 688128 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:55.267038+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 688128 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:56.267429+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 688128 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:57.267599+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:58.267804+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:40:59.268011+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:00.268319+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:01.268514+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 663552 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:02.268708+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 663552 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:03.268963+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 663552 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:04.269215+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:05.269414+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:06.269595+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:07.270352+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68501504 unmapped: 647168 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:08.270599+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68501504 unmapped: 647168 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:09.270933+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68501504 unmapped: 647168 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:10.271239+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:11.271445+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:12.271606+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 630784 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:13.271818+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 630784 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:14.272013+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:15.272183+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:16.272328+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:17.272483+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:18.272645+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:19.272946+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:20.273093+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:21.273259+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:22.273478+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:23.273639+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:24.274112+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:25.274329+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:26.274484+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:27.274730+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:28.274945+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 573440 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:29.275722+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 573440 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:30.275963+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 573440 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:31.276166+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68583424 unmapped: 565248 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:32.276292+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68583424 unmapped: 565248 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:33.276525+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 557056 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:34.277207+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 557056 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:35.278203+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 557056 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:36.279108+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 548864 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:37.279477+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 548864 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:38.279745+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 532480 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:39.280365+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 524288 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:40.280709+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 524288 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:41.281234+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 516096 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:42.281393+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 516096 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:43.281533+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 507904 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:44.281658+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 499712 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:45.281781+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 499712 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:46.282121+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 499712 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:47.282261+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 491520 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:48.282391+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 483328 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:49.282537+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 466944 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:50.282748+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 466944 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:51.282952+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 466944 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:52.283121+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 458752 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:53.283311+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 458752 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:54.283441+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 450560 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:55.283636+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 450560 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:56.283813+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 450560 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:57.283996+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 442368 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:58.284496+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 442368 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:41:59.284768+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 442368 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:00.284929+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 434176 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:01.285101+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 434176 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:02.285261+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 425984 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:03.285502+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 409600 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:04.285818+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 409600 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:05.285990+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:06.286157+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:07.286358+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:08.286560+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:09.286712+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:10.286969+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:11.287182+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:12.287506+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:13.287700+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:14.288032+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:15.288295+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:16.288538+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:17.288711+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:18.288961+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:19.289103+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:20.289255+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:21.289448+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:22.289616+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68812800 unmapped: 335872 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:23.289771+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:24.289911+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:25.290085+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:26.290272+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:27.290419+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:28.290600+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:29.290813+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:30.291009+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:31.291203+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:32.291359+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:33.291514+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:34.291704+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:35.294966+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:36.296239+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:37.296402+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:38.296790+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:39.297825+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:40.297969+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68878336 unmapped: 270336 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:41.298855+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68886528 unmapped: 262144 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:42.299261+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68886528 unmapped: 262144 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:43.299609+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68894720 unmapped: 253952 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:44.299827+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68894720 unmapped: 253952 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:45.299956+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68902912 unmapped: 245760 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:46.300094+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68902912 unmapped: 245760 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:47.300451+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68902912 unmapped: 245760 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:48.300581+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68919296 unmapped: 229376 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:49.300726+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68919296 unmapped: 229376 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:50.300862+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68927488 unmapped: 221184 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:51.301072+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68927488 unmapped: 221184 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:52.301201+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68927488 unmapped: 221184 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:53.301330+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 212992 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:54.301476+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 212992 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:55.301732+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68943872 unmapped: 204800 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:56.301884+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68943872 unmapped: 204800 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:57.302078+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68952064 unmapped: 196608 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:58.302246+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68952064 unmapped: 196608 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:42:59.302389+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68952064 unmapped: 196608 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:00.303028+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68960256 unmapped: 188416 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:01.303245+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68960256 unmapped: 188416 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:02.303453+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68960256 unmapped: 188416 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:03.303577+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 180224 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:04.303747+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 180224 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:05.304034+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 172032 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:06.304199+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 172032 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:07.304340+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 172032 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:08.304525+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68984832 unmapped: 163840 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:09.304841+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68984832 unmapped: 163840 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:10.305231+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 155648 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:11.305981+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 155648 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:12.306112+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 155648 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:13.306456+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69001216 unmapped: 147456 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:14.306633+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 139264 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:15.306796+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 139264 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:16.307044+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 131072 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:17.307207+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 131072 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:18.307366+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 131072 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:19.307639+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69025792 unmapped: 122880 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:20.307865+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69025792 unmapped: 122880 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:21.308079+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69025792 unmapped: 122880 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:22.308262+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 114688 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:23.308656+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 114688 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:24.308851+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:25.309060+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:26.309216+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:27.309598+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:28.309869+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:29.310066+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 90112 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:30.310180+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 90112 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:31.310357+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 90112 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:32.310496+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 81920 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:33.310639+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:34.310760+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 65536 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:35.310915+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 65536 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:36.311045+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 57344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:37.311189+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 57344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:38.311324+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 57344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:39.311457+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 49152 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:40.311608+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 49152 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:41.311762+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:42.312020+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69115904 unmapped: 32768 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:43.312236+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69115904 unmapped: 32768 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:44.312421+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69115904 unmapped: 32768 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:45.312552+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 24576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:46.312738+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 24576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:47.312967+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:48.313165+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:49.313353+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:50.313531+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 8192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:51.313832+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 8192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:52.314032+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 8192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:53.314252+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:54.314440+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 0 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:55.314564+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 0 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:56.314727+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 0 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:57.314874+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69156864 unmapped: 1040384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:58.315068+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69156864 unmapped: 1040384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:43:59.315242+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69165056 unmapped: 1032192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:00.315407+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69165056 unmapped: 1032192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:01.315658+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:02.315851+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:03.316017+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:04.316183+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:05.316337+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:06.316543+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1007616 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:07.316696+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1007616 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:08.316858+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1007616 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:09.317058+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69197824 unmapped: 999424 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:10.317201+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:11.317378+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:12.317505+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69214208 unmapped: 983040 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:13.317671+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69214208 unmapped: 983040 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:14.317871+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69214208 unmapped: 983040 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:15.318035+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:16.318171+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:17.318349+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:18.318460+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:19.318600+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:20.318787+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69238784 unmapped: 958464 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:21.318964+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69238784 unmapped: 958464 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:22.319084+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:23.319309+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:24.319442+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:25.319577+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:26.319689+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:27.319820+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:28.319970+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69263360 unmapped: 933888 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:29.320141+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69263360 unmapped: 933888 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:30.320311+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 925696 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:31.320483+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 925696 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:32.320659+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 925696 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:33.320818+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:34.320967+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:35.321102+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:36.321216+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5514 writes, 23K keys, 5514 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5514 writes, 832 syncs, 6.63 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5514 writes, 23K keys, 5514 commit groups, 1.0 writes per commit group, ingest: 18.56 MB, 0.03 MB/s
                                           Interval WAL: 5514 writes, 832 syncs, 6.63 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:37.321351+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 843776 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:38.321470+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 843776 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:39.321579+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 835584 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:40.321684+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 835584 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:41.321832+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 827392 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:42.321943+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 827392 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:43.322087+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 827392 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:44.322233+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 827392 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:45.322387+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 819200 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:46.322492+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 819200 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:47.322639+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69386240 unmapped: 811008 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:48.322747+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69386240 unmapped: 811008 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:49.322939+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69394432 unmapped: 802816 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:50.323069+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69394432 unmapped: 802816 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:51.323286+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69394432 unmapped: 802816 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:52.323539+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69402624 unmapped: 794624 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:53.323691+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69410816 unmapped: 786432 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:54.323873+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 778240 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:55.324116+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 778240 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:56.324297+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 778240 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:57.324517+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 770048 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:58.324643+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 770048 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:44:59.324842+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 770048 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:00.324959+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 761856 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:01.325163+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 761856 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:02.325295+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69443584 unmapped: 753664 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:03.325464+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69443584 unmapped: 753664 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:04.325609+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 745472 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:05.325791+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 745472 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:06.325945+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 745472 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:07.326099+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 737280 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:08.326238+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 737280 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:09.326435+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 737280 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:10.326569+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69468160 unmapped: 729088 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:11.326789+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69468160 unmapped: 729088 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:12.326945+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69476352 unmapped: 720896 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:13.327073+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69476352 unmapped: 720896 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:14.327333+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69476352 unmapped: 720896 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:15.327967+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69484544 unmapped: 712704 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:16.328489+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69484544 unmapped: 712704 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:17.328636+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69492736 unmapped: 704512 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:18.328772+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69492736 unmapped: 704512 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:19.329007+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69492736 unmapped: 704512 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:20.329151+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 696320 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:21.329285+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 696320 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:22.329415+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 696320 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:23.329570+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:24.329748+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:25.329932+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:26.330097+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 679936 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:27.330210+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 679936 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:28.330362+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:29.330514+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:30.330679+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:31.330853+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 663552 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:32.331102+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 663552 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:33.331296+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 655360 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:34.331435+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 655360 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:35.331580+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 655360 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:36.331761+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:37.331950+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 290.326019287s of 290.333953857s, submitted: 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [1])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:38.332244+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 614400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:39.332392+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 409600 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:40.332543+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 221184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:41.332746+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 204800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:42.332955+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 204800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:43.333088+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 204800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:44.333209+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 188416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:45.333374+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 188416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:46.333562+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 188416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:47.333866+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 188416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:48.334019+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 188416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:49.334161+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 188416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:50.334304+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 188416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:51.334478+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 180224 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:52.334631+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 180224 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:53.334760+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 163840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:54.334869+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 163840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:55.334963+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 163840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:56.335057+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 155648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:57.335329+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 155648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:58.335448+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 147456 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:59.335620+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 147456 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:00.335748+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 139264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:01.336045+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 139264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:02.336181+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 139264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:03.336346+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 131072 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:04.336477+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 131072 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:05.336630+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 131072 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:06.336794+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 122880 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:07.336943+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 122880 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:08.337088+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 114688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:09.337304+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 114688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:10.337552+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 106496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:11.337677+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 106496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:12.337851+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 98304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:13.337985+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 98304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:14.338164+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 90112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:15.338285+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 81920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:16.338418+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 81920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:17.338569+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 73728 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:18.338707+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 73728 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:19.338868+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 57344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:20.339004+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 49152 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:21.339194+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 49152 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:22.339349+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 40960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:23.339503+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 40960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:24.339643+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 32768 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:25.339773+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 32768 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:26.339966+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 32768 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:27.340130+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 24576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:28.340349+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 24576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:29.340509+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 24576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:30.340663+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:31.340944+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:32.341114+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 8192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:33.341293+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 8192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:34.341522+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 0 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:35.341738+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 0 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:36.341926+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1040384 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:37.342049+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1040384 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:38.342189+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1040384 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:39.342313+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:40.342513+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:41.342764+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:42.342958+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:43.343138+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:44.343346+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:45.343483+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:46.343606+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:47.343751+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:48.343881+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:49.344056+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:50.344243+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:51.344415+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:52.344571+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:53.344703+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:54.344868+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:55.345044+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:56.345201+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:57.345309+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:58.345467+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:59.345653+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 991232 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:00.345818+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 991232 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:01.345959+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 991232 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:02.346091+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 991232 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:03.346259+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:04.346453+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:05.346653+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:06.346798+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:07.346965+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:08.347089+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:09.347270+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:10.347709+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:11.347988+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:12.348155+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:13.348319+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:14.348465+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:15.348625+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:16.348798+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:17.348960+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 974848 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:18.349166+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:19.349349+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:20.349569+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:21.349915+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:22.350071+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:23.350191+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:24.350340+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:25.350461+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:26.350591+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:27.350736+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:28.350943+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:29.351102+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:30.351487+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:31.352151+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:32.352348+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:33.352568+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:34.352958+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:35.353110+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:36.353381+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:37.353568+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:38.353734+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 942080 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:39.353994+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 942080 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:40.354135+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 942080 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:41.354324+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 942080 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:42.354514+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 942080 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:43.354661+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 933888 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:44.354787+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 933888 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:45.354952+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:46.355097+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:47.355204+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:48.355352+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:49.355507+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:50.355637+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:51.355953+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:52.356097+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:53.356284+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 917504 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:54.356454+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 917504 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:55.356690+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 917504 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:56.356848+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 917504 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:57.357054+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 917504 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:58.357207+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:59.357350+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:00.357789+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:01.357966+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:02.358087+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:03.358427+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:04.358548+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:05.359347+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:06.359506+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:07.359690+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:08.359885+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:09.360043+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:10.360260+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:11.360461+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:12.360599+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:13.360991+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:14.361215+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:15.361380+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:16.361552+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:17.361705+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:18.361978+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:19.362136+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:20.362371+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:21.362697+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:22.362863+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:23.363070+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:24.363310+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:25.363437+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:26.363615+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:27.363826+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:28.363983+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:29.364155+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:30.364320+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:31.364517+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:32.364741+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:33.364989+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:34.365270+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:35.365435+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:36.365645+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:37.365828+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:38.366005+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 851968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:39.366228+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 851968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:40.366423+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 843776 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:41.366702+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:42.366877+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:43.367065+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:44.367270+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:45.367485+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:46.367770+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:47.368025+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:48.368244+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:49.368542+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:50.368785+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:51.369043+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:52.369217+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:53.369443+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 827392 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:54.369581+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 827392 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:55.369700+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 827392 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:56.369827+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 827392 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:57.370023+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 827392 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:58.370166+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:59.370366+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:00.370612+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:01.373325+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:02.373535+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:03.373845+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:04.373979+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:05.374133+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:06.374969+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:07.375095+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:08.375477+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:09.376012+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:10.376159+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:11.376464+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:12.376617+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:13.376809+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:14.377005+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 884736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:15.377122+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 884736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:16.377318+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 884736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:17.377441+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 884736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:18.377756+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 884736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:19.377911+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:20.378023+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:21.378226+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:22.378434+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:23.378656+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:24.378872+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:25.379179+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:26.379383+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:27.379581+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:28.379829+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:29.379935+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:30.380068+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:31.380277+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:32.380453+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:33.380634+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:34.380772+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:35.380951+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:36.381083+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:37.381231+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:38.381394+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:39.381577+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:40.381754+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:41.381959+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:42.382121+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:43.383793+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: mgrc ms_handle_reset ms_handle_reset con 0x559b45991c00
Oct 01 17:11:34 compute-0 ceph-osd[88140]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3235544197
Oct 01 17:11:34 compute-0 ceph-osd[88140]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: get_auth_request con 0x559b49514400 auth_method 0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: mgrc handle_mgr_configure stats_period=5
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:44.383925+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 540672 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:45.384010+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 540672 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:46.384139+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 540672 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:47.384298+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 540672 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 ms_handle_reset con 0x559b47e9d000 session 0x559b47b32780
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: handle_auth_request added challenge on 0x559b49c30000
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:48.384431+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:49.384610+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:50.384765+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:51.384960+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:52.385125+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:53.385290+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:54.385408+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:55.385555+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:56.385799+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:57.385942+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:58.386163+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:59.386381+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:00.386556+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:01.386853+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:02.387156+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:03.387372+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:04.387557+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:05.387809+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:06.388106+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:07.388349+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:08.388575+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:09.388854+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:10.389078+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:11.389255+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:12.389428+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:13.389651+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:14.389829+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:15.389994+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:16.390139+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 524288 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:17.390292+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:18.390458+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:19.390627+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:20.390815+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:21.390996+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:22.391278+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:23.391444+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:24.391615+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:25.391952+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:26.392158+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:27.392677+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:28.392858+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:29.393142+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:30.393382+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:31.393989+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:32.394136+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:33.394303+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:34.394450+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:35.394580+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:36.394746+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:37.394961+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:38.395155+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:39.395335+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:40.395452+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:41.395671+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:42.395821+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:43.395999+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:44.396131+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:45.396268+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:46.396417+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:47.396578+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:48.396750+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:49.397064+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:50.397254+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:51.397453+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:52.397639+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:53.397803+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:54.397986+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:55.398157+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:56.398298+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:57.398429+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:58.398608+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:59.398780+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:00.398983+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:01.399208+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:02.399487+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:03.399745+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:04.400012+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:05.400269+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:06.400522+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:07.400829+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:08.401077+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:09.401379+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:10.401614+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:11.401951+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:12.402206+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:13.402649+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:14.403257+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:15.403619+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:16.403798+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:17.404485+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:18.404725+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:19.405036+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:20.405265+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:21.405468+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:22.406085+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:23.406260+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:24.406404+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:25.406685+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:26.406834+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:27.407030+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:28.407187+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:29.407427+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:30.407585+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:31.407825+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:32.407992+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:33.408141+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:34.408299+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 466944 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:35.408446+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 466944 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:36.408601+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 466944 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:37.408750+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 466944 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:38.408914+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 466944 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:39.409045+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 466944 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:40.409164+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 466944 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:41.409334+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 442368 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:42.409453+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 442368 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:43.409600+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 442368 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:44.409698+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:45.409869+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:46.410079+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:47.410233+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:48.410315+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:49.410482+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:50.410661+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:51.410821+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:52.414542+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:53.414661+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:54.414830+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:55.414995+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:56.415156+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:57.415314+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:58.415442+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:59.415597+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:00.415758+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:01.415988+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:02.416143+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:03.416230+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:04.416403+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:05.416544+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:06.416747+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:07.416924+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:08.417039+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:09.417185+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:10.417316+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:11.417457+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:12.417648+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:13.417832+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:14.417960+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:15.418135+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:16.418270+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:17.418447+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:18.418606+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:19.418772+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:20.418939+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:21.419112+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:22.419277+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:23.419434+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:24.419584+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:25.419730+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:26.419940+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:27.420145+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:28.420434+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:29.420602+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:30.420768+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:31.420942+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:32.421093+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:33.421294+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 409600 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:34.421447+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 409600 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:35.421616+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 409600 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:36.421770+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 409600 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:37.421928+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 409600 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:38.422089+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:39.422235+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:40.422346+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:41.422501+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:42.422663+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:43.428990+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:44.429134+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:45.429257+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:46.429393+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:47.429959+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:48.430113+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:49.430276+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:50.430441+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:51.430619+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:52.430768+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:53.430881+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:54.431037+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:55.431184+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:56.431380+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:57.431534+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:58.431767+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:59.431972+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:00.432092+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:01.432283+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:02.432418+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:03.432555+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:04.432750+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:05.432931+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:06.433091+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:07.433280+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:08.433452+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:09.433604+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:10.433760+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:11.434005+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:12.434155+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:13.434314+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:14.434460+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:15.434565+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:16.434681+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:17.434795+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:18.434914+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:19.435026+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:20.435141+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:21.435263+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:22.435400+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:23.435556+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:24.435713+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:25.435878+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:26.436074+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:27.436262+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:28.436444+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:29.436577+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:30.436706+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:31.436826+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:32.437155+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:33.437317+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:34.437441+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:35.437582+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:36.437772+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:37.437953+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:38.438103+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:39.438288+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:40.438428+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:41.438624+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 409600 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:42.438851+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 409600 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:43.439107+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:44.439348+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:45.439513+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:46.439758+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:47.439964+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:48.440171+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:49.440395+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:50.441570+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:51.442982+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:52.443156+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:53.443616+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:54.443779+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:55.444526+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:56.445185+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:57.445326+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:58.445505+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:59.445961+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:00.446109+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:01.446597+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:02.446785+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:03.447171+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:04.447518+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:05.447785+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:06.448056+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:07.448280+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:08.448482+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:09.448638+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:10.448785+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:11.448984+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:12.449150+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:13.449328+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:14.449497+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:15.449674+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:16.449821+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:17.449996+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:18.450172+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:19.450376+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:20.450539+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:21.450745+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:22.450948+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:23.451103+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:24.451247+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:25.451417+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:26.451552+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:27.451701+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:28.451837+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:29.451972+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:30.452202+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:31.452365+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:32.452480+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:33.452667+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:34.452852+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:35.453017+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:36.453215+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5726 writes, 24K keys, 5726 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5726 writes, 938 syncs, 6.10 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:37.453324+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 327680 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:38.453443+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 327680 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:39.453632+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 327680 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:40.453840+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 327680 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:41.454117+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:42.454280+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:43.454433+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:44.454569+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:45.454739+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:46.454977+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:47.455151+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:48.455330+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:49.455511+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:50.455670+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:51.455838+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:52.456017+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:53.456202+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:54.456364+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:55.456524+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:56.456705+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:57.457076+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:58.457210+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:59.457335+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:00.457452+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:01.457619+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:02.457785+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:03.457971+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:04.458116+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:05.458306+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:06.458427+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:07.458588+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:08.458727+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:09.458871+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:10.459033+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:11.459188+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:12.459297+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:13.459439+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:14.459605+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:15.459777+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:16.459955+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:17.460108+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:18.460227+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:19.460409+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:20.460546+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:21.460771+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:22.460956+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:23.461094+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:24.461254+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:25.461417+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:26.461589+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:27.461782+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:28.461973+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:29.462149+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:30.462305+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:31.462473+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:32.462586+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:33.462745+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:34.462939+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:35.463085+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:36.463237+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:37.463412+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:38.463576+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 600.159301758s of 601.157226562s, submitted: 106
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:39.463724+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:40.464129+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:41.464329+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:42.464517+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:43.464638+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:44.464790+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:45.464963+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:46.465138+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:47.465283+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:48.465486+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:49.465751+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:50.465924+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:51.466099+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:52.466265+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:53.466441+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:54.466579+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:55.466747+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:56.466851+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:57.467014+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:58.467167+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:59.467319+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:00.467442+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:01.467655+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:02.467853+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:03.467999+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:04.468153+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:05.468272+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:06.468407+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:07.468612+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:08.468722+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:09.468885+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:10.469054+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:11.469250+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:12.469374+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:13.469575+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:14.469761+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:15.469968+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:16.470105+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:17.470254+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:18.470386+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:19.470535+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:20.470687+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:21.470995+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:22.471157+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:23.471296+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:24.471456+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:25.471598+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:26.471761+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:27.471979+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:28.472110+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:29.472268+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:30.472400+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:31.472561+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:32.472743+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:33.472958+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:34.473150+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:35.473277+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:36.473412+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:37.473546+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:38.473714+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:39.473846+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:40.474009+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:41.474196+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:42.474327+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:43.474464+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:44.474648+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:45.474789+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:46.475002+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:47.475171+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:48.475326+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:49.475519+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:50.476705+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:51.477069+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:52.477225+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:53.477419+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:54.477582+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:55.477728+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:56.478037+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:57.478240+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:58.478413+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:59.478567+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:00.478742+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:01.478927+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:02.479111+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:03.479522+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:04.479727+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:05.479963+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:06.480209+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:07.480427+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:08.480648+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:09.481030+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:10.481166+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:11.481397+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:12.481588+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:13.481788+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:14.481985+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:15.482179+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:16.482378+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:17.482567+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:18.482742+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:19.482967+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:20.483144+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:21.483374+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:22.483543+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:23.483774+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:24.483925+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:25.484172+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:26.484399+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:27.484668+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:28.484863+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:29.485174+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:30.485400+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:31.485623+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:32.485788+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:33.485936+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:34.486111+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:35.486300+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:36.486458+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:37.486661+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:38.486810+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:39.486977+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:40.487121+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:41.487310+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:42.487496+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:43.487633+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:44.487795+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:45.487944+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:46.488075+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:47.488246+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:48.488439+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:49.488606+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:50.488761+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:51.488950+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:52.489125+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:53.489343+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:54.489517+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:55.489703+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:56.489941+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:57.490103+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:58.490277+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:59.490441+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:00.490603+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:01.490766+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:02.491000+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:03.491214+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:04.491357+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:05.491541+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:06.491679+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:07.491842+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:08.492037+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:09.492180+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:10.492400+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:11.492585+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:12.492698+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:13.492813+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:14.492998+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:15.493163+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:16.493337+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:17.493501+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:18.493666+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:19.493801+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:20.493949+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:21.494112+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:22.494248+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:23.494378+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:24.494504+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:25.494651+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:26.494773+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:27.494967+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:28.495127+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:29.495267+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:30.495430+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:31.495597+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:32.495789+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:33.495961+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:34.496179+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:35.496340+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:36.496497+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:37.496660+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:38.496789+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:39.497056+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:40.497247+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:41.497430+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:42.497599+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:43.497766+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:44.497974+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:45.498135+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:46.498354+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:47.498526+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:48.498681+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:49.498951+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:50.499220+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:51.499467+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:52.499633+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:53.499825+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:54.499979+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:55.500146+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:56.500304+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:57.500551+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:58.500698+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:59.500947+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:00.501092+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:01.501299+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:02.501410+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:03.501561+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:04.501716+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:05.501997+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:06.502140+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:07.502285+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:08.502438+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:09.502613+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:10.502792+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:11.503016+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:12.503157+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:13.503331+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:14.503522+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:15.503695+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:16.503865+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:17.504052+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:18.504216+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 114688 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:19.504392+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 114688 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:20.504508+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 114688 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:21.504681+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 114688 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:22.504808+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 222.721786499s of 223.958694458s, submitted: 106
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: handle_auth_request added challenge on 0x559b49c30400
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 24576 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:23.504978+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 125 ms_handle_reset con 0x559b49c30400 session 0x559b47e55c20
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fcaa7000/0x0/0x4ffc00000, data 0xbcff9/0x177000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 925696 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:24.505147+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 925696 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:25.505310+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fcaa3000/0x0/0x4ffc00000, data 0xbeb92/0x17a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: handle_auth_request added challenge on 0x559b49c30800
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:26.505485+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 10158080 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910146 data_alloc: 218103808 data_used: 217088
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 126 ms_handle_reset con 0x559b49c30800 session 0x559b48c72780
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:27.505661+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:28.505800+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:29.505983+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:30.506165+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc62f000/0x0/0x4ffc00000, data 0x53074e/0x5ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62b000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:31.506331+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913032 data_alloc: 218103808 data_used: 221184
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:32.506455+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:33.506651+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:34.506837+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 10100736 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62c000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:35.506973+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 10100736 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:36.507126+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 10100736 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913032 data_alloc: 218103808 data_used: 221184
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:37.507304+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 10100736 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:38.507441+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 10100736 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:39.507581+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 10100736 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:40.507698+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 10100736 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62c000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:41.507853+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 10076160 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913192 data_alloc: 218103808 data_used: 225280
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:42.508012+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 10076160 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:43.508157+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 10076160 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:44.508300+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 10076160 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:45.508439+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 10076160 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Got map version 10
Oct 01 17:11:34 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:46.508586+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 10067968 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62c000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913192 data_alloc: 218103808 data_used: 225280
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:47.508722+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 10067968 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:48.508858+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 10067968 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:49.509039+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 10067968 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:50.509218+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 10067968 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:51.509400+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 10067968 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62c000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913192 data_alloc: 218103808 data_used: 225280
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:52.509529+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 10067968 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:53.509658+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 10067968 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62c000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Got map version 11
Oct 01 17:11:34 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:54.509802+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.660118103s of 32.033905029s, submitted: 47
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:55.509965+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:56.510134+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912520 data_alloc: 218103808 data_used: 225280
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:57.510282+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62d000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:58.510431+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:59.510582+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:00.510815+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62d000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:01.511096+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62d000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912520 data_alloc: 218103808 data_used: 225280
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:02.511279+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:03.511433+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:04.511572+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62d000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:05.511700+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.007043839s of 11.026364326s, submitted: 4
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:06.511829+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62d000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912504 data_alloc: 218103808 data_used: 225280
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:07.511979+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:08.512119+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62d000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:09.512281+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62d000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:10.512419+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:11.512617+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fc629000/0x0/0x4ffc00000, data 0x533d97/0x5f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916662 data_alloc: 218103808 data_used: 233472
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:12.512777+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:13.512944+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fc629000/0x0/0x4ffc00000, data 0x533d97/0x5f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:14.513091+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fc62a000/0x0/0x4ffc00000, data 0x533d97/0x5f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:15.513260+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:16.513492+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:17.513660+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915798 data_alloc: 218103808 data_used: 233472
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:18.513875+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.649345398s of 13.016798973s, submitted: 26
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:19.514085+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fc62a000/0x0/0x4ffc00000, data 0x533d97/0x5f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:20.514239+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:21.514482+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:22.514622+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915798 data_alloc: 218103808 data_used: 233472
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:23.514742+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:24.514860+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fc626000/0x0/0x4ffc00000, data 0x5357fa/0x5f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:25.516054+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:26.516171+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:27.516322+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919940 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:28.516491+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:29.516635+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:30.516777+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fc626000/0x0/0x4ffc00000, data 0x5357fa/0x5f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:31.517074+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:32.517261+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920116 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:33.517404+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fc626000/0x0/0x4ffc00000, data 0x5357fa/0x5f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:34.517535+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:35.517711+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:36.517968+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fc626000/0x0/0x4ffc00000, data 0x5357fa/0x5f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.035810471s of 18.060047150s, submitted: 15
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:37.518147+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922914 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 10223616 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:38.518378+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 10223616 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:39.518515+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 10223616 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:40.518697+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 10223616 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:41.518945+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 10207232 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:42.519127+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922914 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 10207232 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fc623000/0x0/0x4ffc00000, data 0x5373e0/0x5fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:43.519296+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 10207232 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:44.519498+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 10207232 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:45.519704+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 10199040 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:46.519850+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 10199040 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fc620000/0x0/0x4ffc00000, data 0x538e43/0x5fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:47.519965+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925888 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 10199040 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:48.520116+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 10190848 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.896615982s of 11.985222816s, submitted: 30
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fc620000/0x0/0x4ffc00000, data 0x538e43/0x5fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:49.520251+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:50.520452+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:51.520637+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:52.520802+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925216 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:53.520980+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fc621000/0x0/0x4ffc00000, data 0x538e43/0x5fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:54.521146+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:55.521326+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: handle_auth_request added challenge on 0x559b49c30c00
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:56.521518+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fc620000/0x0/0x4ffc00000, data 0x538ede/0x5fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:57.521648+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926984 data_alloc: 218103808 data_used: 245760
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:58.521796+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:59.521956+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.157895088s of 10.438511848s, submitted: 5
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:00.522136+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:01.522366+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc620000/0x0/0x4ffc00000, data 0x538ede/0x5fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:02.522551+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930950 data_alloc: 218103808 data_used: 253952
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc61c000/0x0/0x4ffc00000, data 0x53aac4/0x601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:03.522711+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:04.522929+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:05.523118+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 10174464 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:06.523253+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 10174464 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc61d000/0x0/0x4ffc00000, data 0x53ab3a/0x601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: handle_auth_request added challenge on 0x559b49c31800
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:07.523395+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935146 data_alloc: 218103808 data_used: 262144
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 10174464 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:08.523552+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Got map version 12
Oct 01 17:11:34 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:09.523705+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:10.523920+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.907539845s of 11.074452400s, submitted: 45
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:11.524107+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc614000/0x0/0x4ffc00000, data 0x53e3c3/0x608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:12.524274+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940452 data_alloc: 218103808 data_used: 270336
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc617000/0x0/0x4ffc00000, data 0x53e328/0x607000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [0,0,0,0,1])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:13.524435+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 10100736 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:14.524568+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 10067968 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:15.524689+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 9003008 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:16.524803+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 9003008 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:17.525012+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942864 data_alloc: 218103808 data_used: 278528
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 9003008 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:18.525250+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc614000/0x0/0x4ffc00000, data 0x53fefb/0x609000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 9003008 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:19.525383+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 9003008 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:20.525521+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 8953856 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc611000/0x0/0x4ffc00000, data 0x54197e/0x60c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc611000/0x0/0x4ffc00000, data 0x54197e/0x60c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.943125725s of 10.805793762s, submitted: 102
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:21.525671+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:22.525802+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948620 data_alloc: 218103808 data_used: 278528
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:23.525945+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc60e000/0x0/0x4ffc00000, data 0x543594/0x60f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:24.526409+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc60e000/0x0/0x4ffc00000, data 0x543594/0x60f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:25.526786+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:26.527028+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc60a000/0x0/0x4ffc00000, data 0x5450b2/0x613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc60a000/0x0/0x4ffc00000, data 0x5450b2/0x613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:27.527178+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953842 data_alloc: 218103808 data_used: 290816
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:28.527297+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc60a000/0x0/0x4ffc00000, data 0x5450b2/0x613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:29.527434+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:30.527574+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:31.527767+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:32.527949+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953842 data_alloc: 218103808 data_used: 290816
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:33.528101+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 8921088 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc60a000/0x0/0x4ffc00000, data 0x5450b2/0x613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:34.528231+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 8921088 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.841822624s of 13.927069664s, submitted: 33
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:35.528373+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 8921088 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:36.528456+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 8921088 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:37.528532+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc60a000/0x0/0x4ffc00000, data 0x5450b2/0x613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954018 data_alloc: 218103808 data_used: 290816
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 8921088 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:38.528626+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 8921088 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:39.528729+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 8912896 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:40.528855+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 8912896 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:41.529064+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 8904704 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:42.529185+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960900 data_alloc: 218103808 data_used: 299008
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fc606000/0x0/0x4ffc00000, data 0x546d63/0x617000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 8896512 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:43.529371+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 8896512 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:44.529527+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 8896512 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:45.529660+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.690116882s of 10.124419212s, submitted: 31
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 8896512 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc607000/0x0/0x4ffc00000, data 0x546d63/0x617000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:46.529829+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 8896512 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:47.530031+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962462 data_alloc: 218103808 data_used: 307200
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 8896512 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:48.530198+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 8896512 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:49.530347+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc605000/0x0/0x4ffc00000, data 0x5486b0/0x618000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 8896512 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:50.530529+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc605000/0x0/0x4ffc00000, data 0x5486b0/0x618000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 7831552 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:51.530990+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 7831552 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:52.531274+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964230 data_alloc: 218103808 data_used: 307200
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 7831552 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc604000/0x0/0x4ffc00000, data 0x54874b/0x619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:53.531458+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 7823360 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc606000/0x0/0x4ffc00000, data 0x5486b0/0x618000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:54.531622+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 7815168 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc606000/0x0/0x4ffc00000, data 0x5486b0/0x618000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:55.531745+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 7815168 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.231688499s of 10.758177757s, submitted: 20
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:56.531925+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 7806976 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:57.532073+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc602000/0x0/0x4ffc00000, data 0x54a296/0x61b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965772 data_alloc: 218103808 data_used: 315392
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 7806976 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:58.532949+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 7806976 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:59.533180+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 7806976 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:00.533505+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 7806976 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:01.533847+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc601000/0x0/0x4ffc00000, data 0x54a331/0x61c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 7790592 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:02.534156+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970130 data_alloc: 218103808 data_used: 315392
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 7790592 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:03.534428+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 7790592 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:04.534776+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 7790592 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:05.535063+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 7790592 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.882214546s of 10.021253586s, submitted: 37
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:06.535318+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 7757824 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:07.535722+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968736 data_alloc: 218103808 data_used: 315392
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 7757824 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:08.535934+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 7757824 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:09.536152+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 7757824 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:10.536367+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 7757824 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:11.536600+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 7757824 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:12.536822+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968736 data_alloc: 218103808 data_used: 315392
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 7757824 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:13.536963+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Got map version 13
Oct 01 17:11:34 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 7757824 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:14.537158+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 7757824 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:15.537422+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 7757824 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.978836060s of 10.002865791s, submitted: 4
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:16.537576+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:17.537831+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968736 data_alloc: 218103808 data_used: 315392
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:18.538012+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:19.538223+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:20.538432+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:21.538751+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:22.538967+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968736 data_alloc: 218103808 data_used: 315392
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:23.539126+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:24.539293+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:25.539450+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:26.539643+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:27.539764+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969056 data_alloc: 218103808 data_used: 323584
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:28.540035+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:29.540238+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:30.540416+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.988643646s of 14.997574806s, submitted: 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:31.540605+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:32.540765+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969072 data_alloc: 218103808 data_used: 323584
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:33.540979+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:34.541109+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:35.541269+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:36.541458+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:37.541602+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969088 data_alloc: 218103808 data_used: 323584
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:38.541705+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:39.541804+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:40.542021+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x54bdc2/0x61f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.036031723s of 10.064247131s, submitted: 7
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 ms_handle_reset con 0x559b49c31800 session 0x559b470cef00
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:41.542224+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:42.542399+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Got map version 14
Oct 01 17:11:34 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970648 data_alloc: 218103808 data_used: 323584
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:43.542534+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:44.542778+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:45.542997+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:46.543112+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:47.543304+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969958 data_alloc: 218103808 data_used: 323584
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:48.543522+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:49.543776+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:50.543956+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:51.544199+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.928445816s of 10.967930794s, submitted: 139
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:52.544337+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969974 data_alloc: 218103808 data_used: 323584
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:53.544500+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:54.544692+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:55.544978+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:56.545148+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:57.545328+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969974 data_alloc: 218103808 data_used: 323584
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:58.545510+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:59.545700+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:00.545834+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:01.545981+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:02.546156+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969990 data_alloc: 218103808 data_used: 323584
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:03.546311+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.004209518s of 12.012298584s, submitted: 2
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:04.546476+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:05.546632+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:06.546794+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:07.584459+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969974 data_alloc: 218103808 data_used: 323584
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:08.584608+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:09.584806+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:10.584988+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:11.585169+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:12.585345+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969974 data_alloc: 218103808 data_used: 323584
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:13.585452+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.988098145s of 10.001517296s, submitted: 3
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:14.585575+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:15.585715+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x54bdc1/0x61f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:16.585865+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:17.586030+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971726 data_alloc: 218103808 data_used: 323584
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 7184384 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:18.586170+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 7315456 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:19.586297+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 7315456 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:20.586518+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 7315456 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:21.586664+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5ff000/0x0/0x4ffc00000, data 0x54bdbf/0x61f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 7315456 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:22.586831+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969958 data_alloc: 218103808 data_used: 323584
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 7315456 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:23.586961+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 7290880 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:24.587145+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 7290880 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:25.587298+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 7290880 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:26.587445+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 7290880 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:27.587616+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971742 data_alloc: 218103808 data_used: 323584
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 7290880 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.930398941s of 14.013147354s, submitted: 9
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:28.587845+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5ff000/0x0/0x4ffc00000, data 0x54bd94/0x61f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 7290880 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:29.588157+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 7290880 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:30.588265+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 7290880 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:31.588466+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 7274496 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:32.588617+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5fd000/0x0/0x4ffc00000, data 0x54beca/0x621000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974910 data_alloc: 218103808 data_used: 323584
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 7266304 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:33.588826+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 7266304 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:34.588983+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 7266304 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:35.589193+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc5f9000/0x0/0x4ffc00000, data 0x54dab0/0x624000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 7266304 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:36.589350+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 7258112 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:37.589637+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978826 data_alloc: 218103808 data_used: 331776
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 7258112 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:38.589788+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 7258112 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:39.589951+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x54da15/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 7258112 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.953672409s of 12.076897621s, submitted: 31
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:40.590088+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 7258112 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:41.590282+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 7258112 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:42.590497+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984944 data_alloc: 218103808 data_used: 344064
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 7225344 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:43.590661+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc5f6000/0x0/0x4ffc00000, data 0x55103a/0x628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 7184384 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:44.590848+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 7184384 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:45.591050+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc5f6000/0x0/0x4ffc00000, data 0x55103a/0x628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 7176192 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:46.591221+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 7176192 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:47.591398+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989158 data_alloc: 218103808 data_used: 352256
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 7176192 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:48.591570+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 7176192 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:49.591722+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f2000/0x0/0x4ffc00000, data 0x552abf/0x62b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 7176192 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.864019394s of 10.115748405s, submitted: 47
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:50.591935+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 7168000 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:51.592093+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 7168000 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:52.592275+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988454 data_alloc: 218103808 data_used: 352256
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 7168000 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:53.592446+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f3000/0x0/0x4ffc00000, data 0x552abd/0x62b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 7168000 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:54.592608+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:55.592751+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:56.592868+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:57.592998+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989500 data_alloc: 218103808 data_used: 352256
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f2000/0x0/0x4ffc00000, data 0x552abe/0x62b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:58.593116+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:59.593234+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f2000/0x0/0x4ffc00000, data 0x552abe/0x62b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:00.593360+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.076109886s of 10.867831230s, submitted: 11
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:01.593512+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:02.594025+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987588 data_alloc: 218103808 data_used: 352256
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:03.594316+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f4000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:04.594514+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f4000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:05.594727+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f4000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:06.594991+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f4000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f4000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:07.595195+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987588 data_alloc: 218103808 data_used: 352256
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:08.595327+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f4000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:09.595460+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:10.595583+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.988055229s of 10.014366150s, submitted: 5
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:11.595718+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:12.595878+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987604 data_alloc: 218103808 data_used: 352256
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:13.596076+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f4000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:14.596284+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f4000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:15.596419+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:16.596627+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f4000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:17.596770+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987604 data_alloc: 218103808 data_used: 352256
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:18.596957+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f4000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:19.597112+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:20.597350+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.957857132s of 10.065895081s, submitted: 4
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:21.597639+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:22.597788+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f2000/0x0/0x4ffc00000, data 0x552abf/0x62b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989484 data_alloc: 218103808 data_used: 352256
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:23.597966+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:24.598096+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 7127040 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:25.598286+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 7127040 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:26.598490+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 7127040 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f3000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:27.598662+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 7127040 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989308 data_alloc: 218103808 data_used: 352256
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:28.598849+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 7102464 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f1000/0x0/0x4ffc00000, data 0x552b5b/0x62c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:29.599022+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 7102464 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:30.599222+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 7094272 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:31.599440+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 7094272 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.224013329s of 10.345972061s, submitted: 12
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:32.599590+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 7086080 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992364 data_alloc: 218103808 data_used: 352256
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:33.599729+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 7086080 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:34.599880+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 7086080 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f1000/0x0/0x4ffc00000, data 0x552c22/0x62d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:35.600082+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 7086080 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f1000/0x0/0x4ffc00000, data 0x552c22/0x62d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:36.600231+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 7061504 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f1000/0x0/0x4ffc00000, data 0x552c22/0x62d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 6923 writes, 27K keys, 6923 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6923 writes, 1355 syncs, 5.11 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1197 writes, 3013 keys, 1197 commit groups, 1.0 writes per commit group, ingest: 1.69 MB, 0.00 MB/s
                                           Interval WAL: 1197 writes, 417 syncs, 2.87 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:37.600389+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 7061504 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992364 data_alloc: 218103808 data_used: 352256
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:38.600557+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 7061504 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:39.600694+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 7061504 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:40.601006+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 7061504 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:41.601242+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f1000/0x0/0x4ffc00000, data 0x552bf6/0x62d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 7061504 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:42.601362+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 7061504 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992364 data_alloc: 218103808 data_used: 352256
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:43.601592+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 7061504 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: mgrc ms_handle_reset ms_handle_reset con 0x559b49514400
Oct 01 17:11:34 compute-0 ceph-osd[88140]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3235544197
Oct 01 17:11:34 compute-0 ceph-osd[88140]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: get_auth_request con 0x559b49c2f800 auth_method 0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: mgrc handle_mgr_configure stats_period=5
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:44.601732+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 6963200 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:45.601963+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 6963200 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:46.602151+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 6963200 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f1000/0x0/0x4ffc00000, data 0x552bf6/0x62d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:47.602324+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 6963200 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 ms_handle_reset con 0x559b49c30000 session 0x559b490d3c20
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: handle_auth_request added challenge on 0x559b47de8c00
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992364 data_alloc: 218103808 data_used: 352256
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:48.602454+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 6963200 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.831445694s of 16.878786087s, submitted: 5
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:49.602610+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 6938624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f1000/0x0/0x4ffc00000, data 0x552bf6/0x62d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:50.627732+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 6938624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:51.627959+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 6938624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:52.628165+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 6922240 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f0000/0x0/0x4ffc00000, data 0x552cbd/0x62e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993956 data_alloc: 218103808 data_used: 352256
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:53.628287+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 6922240 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:54.628502+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 6922240 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:55.628691+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 6922240 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:56.628844+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 6922240 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:57.629019+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 6914048 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991674 data_alloc: 218103808 data_used: 352256
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:58.629199+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 6905856 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f2000/0x0/0x4ffc00000, data 0x552b59/0x62c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:59.629346+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 6905856 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:00.629548+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 6905856 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f2000/0x0/0x4ffc00000, data 0x552b59/0x62c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:01.629774+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 6905856 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.806101799s of 13.457287788s, submitted: 17
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:02.629952+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 6905856 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f2000/0x0/0x4ffc00000, data 0x552b59/0x62c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990984 data_alloc: 218103808 data_used: 352256
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:03.630100+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 6897664 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:04.630258+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 6889472 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:05.630434+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 3022848 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5e4000/0x0/0x4ffc00000, data 0x56076e/0x63a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:06.630610+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 3022848 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:07.631024+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 2744320 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005552 data_alloc: 218103808 data_used: 352256
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:08.631182+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 1564672 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:09.631341+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 1531904 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:10.631506+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 1220608 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:11.631757+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 794624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb3c4000/0x0/0x4ffc00000, data 0x5e1481/0x6ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [1])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.616258621s of 10.176497459s, submitted: 71
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:12.631993+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 1818624 heap: 83705856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:13.632152+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006662 data_alloc: 218103808 data_used: 352256
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 2449408 heap: 83705856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:14.632290+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 2449408 heap: 83705856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:15.632478+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 434176 heap: 83705856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:16.632600+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 999424 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:17.632727+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 999424 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb30e000/0x0/0x4ffc00000, data 0x697588/0x770000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:18.632951+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017350 data_alloc: 218103808 data_used: 352256
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 1998848 heap: 85803008 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:19.633106+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 1490944 heap: 85803008 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:20.633244+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb285000/0x0/0x4ffc00000, data 0x71fc5d/0x7f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 303104 heap: 85803008 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:21.633427+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 196608 heap: 85803008 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:22.633588+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.204580307s of 10.285860062s, submitted: 86
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 2121728 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:23.633712+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033122 data_alloc: 218103808 data_used: 352256
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 2105344 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:24.634052+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 2097152 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:25.634255+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 2121728 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:26.634389+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb1c9000/0x0/0x4ffc00000, data 0x7d9a78/0x8b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 2179072 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:27.634563+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2105344 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:28.634698+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1036590 data_alloc: 218103808 data_used: 360448
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 86810624 unmapped: 2138112 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:29.634826+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 2129920 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:30.634975+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 87203840 unmapped: 1744896 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:31.635201+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 87842816 unmapped: 2154496 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb11c000/0x0/0x4ffc00000, data 0x886afa/0x962000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [0,0,0,1])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:32.635320+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.199892998s of 10.026391983s, submitted: 138
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 2138112 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:33.635537+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1056982 data_alloc: 218103808 data_used: 360448
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 1957888 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:34.635846+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88121344 unmapped: 1875968 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:35.635984+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88121344 unmapped: 1875968 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:36.636135+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 1638400 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:37.637058+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88498176 unmapped: 1499136 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb076000/0x0/0x4ffc00000, data 0x92c598/0xa07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:38.637245+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052752 data_alloc: 218103808 data_used: 368640
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 87367680 unmapped: 2629632 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb053000/0x0/0x4ffc00000, data 0x950fea/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:39.637491+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 87171072 unmapped: 2826240 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:40.637608+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fac3e000/0x0/0x4ffc00000, data 0x9561fb/0xa30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88301568 unmapped: 1695744 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:41.637754+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 1638400 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:42.637884+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 1638400 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fac3a000/0x0/0x4ffc00000, data 0x957de1/0xa33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:43.638067+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054754 data_alloc: 218103808 data_used: 376832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 1638400 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:44.638237+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fac3a000/0x0/0x4ffc00000, data 0x957de1/0xa33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 1630208 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.679636955s of 12.825030327s, submitted: 178
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:45.638485+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 1630208 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:46.638660+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88375296 unmapped: 1622016 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac35000/0x0/0x4ffc00000, data 0x95990b/0xa37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:47.638806+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88375296 unmapped: 1622016 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:48.638980+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059254 data_alloc: 218103808 data_used: 385024
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89432064 unmapped: 1613824 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:49.639158+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 2662400 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:50.639337+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 2662400 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:51.639580+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 2662400 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:52.639771+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 2662400 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac35000/0x0/0x4ffc00000, data 0x959844/0xa36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:53.639948+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059286 data_alloc: 218103808 data_used: 385024
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 2654208 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:54.640133+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 2654208 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:55.640303+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x959844/0xa36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 2646016 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:56.640476+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 2646016 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:57.640639+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 2646016 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.679360390s of 12.896973610s, submitted: 41
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:58.640806+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064318 data_alloc: 218103808 data_used: 401408
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:59.640979+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:00.641138+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x95b4c5/0xa3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 151 handle_osd_map epochs [152,152], i have 152, src has [1,152]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:01.641358+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:02.641552+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:03.641781+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066548 data_alloc: 218103808 data_used: 409600
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fac30000/0x0/0x4ffc00000, data 0x95cf28/0xa3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 152 handle_osd_map epochs [153,153], i have 153, src has [1,153]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:04.641927+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:05.642107+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:06.642289+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x95ea73/0xa3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:07.642446+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.969439507s of 10.173136711s, submitted: 37
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:08.642656+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070792 data_alloc: 218103808 data_used: 409600
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:09.642821+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fac2d000/0x0/0x4ffc00000, data 0x95eb0e/0xa40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:10.642981+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:11.643193+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:12.643409+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x9621b3/0xa46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:13.643530+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077044 data_alloc: 218103808 data_used: 409600
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:14.643678+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:15.643875+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fac25000/0x0/0x4ffc00000, data 0x963b9b/0xa48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:16.644050+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:17.644192+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:18.644329+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079168 data_alloc: 218103808 data_used: 417792
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fac25000/0x0/0x4ffc00000, data 0x963b9b/0xa48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:19.644494+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fac25000/0x0/0x4ffc00000, data 0x963b9b/0xa48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.558800697s of 11.713101387s, submitted: 45
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:20.644618+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 2686976 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:21.644786+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 2686976 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:22.644984+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 2686976 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:23.645131+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083894 data_alloc: 218103808 data_used: 417792
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:24.645298+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fac21000/0x0/0x4ffc00000, data 0x96581c/0xa4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:25.645433+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:26.645594+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:27.645742+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:28.645876+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088604 data_alloc: 218103808 data_used: 425984
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:29.646014+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fac1d000/0x0/0x4ffc00000, data 0x96731a/0xa50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:30.646168+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.325958252s of 10.345420837s, submitted: 35
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:31.646330+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fac1e000/0x0/0x4ffc00000, data 0x96731a/0xa50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:32.646488+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:33.646686+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088802 data_alloc: 218103808 data_used: 425984
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:34.646864+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88375296 unmapped: 2670592 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fac1e000/0x0/0x4ffc00000, data 0x96731a/0xa50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:35.647012+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fac1e000/0x0/0x4ffc00000, data 0x96731a/0xa50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88375296 unmapped: 2670592 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fac1e000/0x0/0x4ffc00000, data 0x96731a/0xa50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:36.647204+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 2662400 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:37.647458+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 2662400 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:38.647624+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092042 data_alloc: 218103808 data_used: 425984
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 2662400 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:39.647789+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 2662400 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:40.647967+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.259268761s of 10.255467415s, submitted: 8
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fac1d000/0x0/0x4ffc00000, data 0x9673b5/0xa51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 2662400 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:41.648116+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 2646016 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:42.648274+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: handle_auth_request added challenge on 0x559b47de9000
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 2637824 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:43.648392+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096476 data_alloc: 218103808 data_used: 434176
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fac19000/0x0/0x4ffc00000, data 0x969045/0xa54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 2637824 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:44.648492+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Got map version 15
Oct 01 17:11:34 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 2572288 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:45.648630+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fac1b000/0x0/0x4ffc00000, data 0x968faa/0xa53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 2572288 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:46.648773+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fac1a000/0x0/0x4ffc00000, data 0x969041/0xa54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 159 handle_osd_map epochs [160,160], i have 160, src has [1,160]
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89530368 unmapped: 1515520 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac17000/0x0/0x4ffc00000, data 0x96a9af/0xa56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:47.648972+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89530368 unmapped: 1515520 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:48.649121+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101190 data_alloc: 218103808 data_used: 442368
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 1507328 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:49.649265+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 1507328 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:50.649396+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 1507328 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:51.649561+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 1507328 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:52.649714+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 1507328 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac17000/0x0/0x4ffc00000, data 0x96a9af/0xa56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:53.649841+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101190 data_alloc: 218103808 data_used: 442368
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 1507328 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:54.650010+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 1507328 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:55.650163+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac17000/0x0/0x4ffc00000, data 0x96a9af/0xa56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 1507328 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:56.650303+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.995437622s of 16.123609543s, submitted: 43
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:57.650428+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:58.650561+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103118 data_alloc: 218103808 data_used: 446464
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:59.650738+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:00.651276+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac16000/0x0/0x4ffc00000, data 0x96aa4a/0xa57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:01.651498+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:02.651648+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:03.651865+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103118 data_alloc: 218103808 data_used: 446464
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:04.652045+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:05.652240+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac16000/0x0/0x4ffc00000, data 0x96aa4a/0xa57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:06.652367+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:07.652540+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac17000/0x0/0x4ffc00000, data 0x96aa4a/0xa57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:08.652704+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102062 data_alloc: 218103808 data_used: 446464
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.603413582s of 12.616387367s, submitted: 3
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:09.652875+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89563136 unmapped: 1482752 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:10.653150+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89563136 unmapped: 1482752 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:11.653328+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89563136 unmapped: 1482752 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:12.653507+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac18000/0x0/0x4ffc00000, data 0x96a9af/0xa56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89563136 unmapped: 1482752 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:13.653679+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101356 data_alloc: 218103808 data_used: 446464
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89563136 unmapped: 1482752 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:14.653842+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 1474560 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:15.654015+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 1474560 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:16.654200+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac18000/0x0/0x4ffc00000, data 0x96a9af/0xa56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 1474560 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:17.654395+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac18000/0x0/0x4ffc00000, data 0x96a9af/0xa56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 1474560 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:18.654566+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101372 data_alloc: 218103808 data_used: 446464
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 1458176 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:19.654724+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 1458176 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:20.654931+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.980280876s of 11.014110565s, submitted: 7
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac17000/0x0/0x4ffc00000, data 0x96aa4a/0xa57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 1458176 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:21.655146+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac17000/0x0/0x4ffc00000, data 0x96aa4a/0xa57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 1458176 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:22.655322+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 1458176 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:23.655480+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103156 data_alloc: 218103808 data_used: 446464
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89595904 unmapped: 1449984 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac17000/0x0/0x4ffc00000, data 0x96aa4a/0xa57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:24.655649+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:25.655818+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:26.655954+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:27.656088+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:28.656247+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:34 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101356 data_alloc: 218103808 data_used: 446464
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:29.656413+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac18000/0x0/0x4ffc00000, data 0x96a9af/0xa56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:30.656580+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac18000/0x0/0x4ffc00000, data 0x96a9af/0xa56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:31.656782+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:32.657048+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:34 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:33.657199+0000)
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:34 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101516 data_alloc: 218103808 data_used: 450560
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:34.657407+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.004671097s of 14.022926331s, submitted: 4
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:35.657549+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90652672 unmapped: 1441792 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:36.657737+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac19000/0x0/0x4ffc00000, data 0x96a918/0xa55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90652672 unmapped: 1441792 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac19000/0x0/0x4ffc00000, data 0x96a918/0xa55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:37.657869+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90652672 unmapped: 1441792 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:38.658049+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100682 data_alloc: 218103808 data_used: 446464
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90652672 unmapped: 1441792 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:39.658237+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90652672 unmapped: 1441792 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:40.658387+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90652672 unmapped: 1441792 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:41.658645+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90660864 unmapped: 1433600 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:42.658831+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac19000/0x0/0x4ffc00000, data 0x96a918/0xa55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89612288 unmapped: 2482176 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:43.658969+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100698 data_alloc: 218103808 data_used: 446464
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89612288 unmapped: 2482176 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:44.659195+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 160 handle_osd_map epochs [160,161], i have 160, src has [1,161]
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.442216873s of 10.615092278s, submitted: 4
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 2457600 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:45.659391+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 2449408 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:46.659531+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 2449408 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:47.659677+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fac16000/0x0/0x4ffc00000, data 0x96c493/0xa57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 2449408 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:48.659802+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103684 data_alloc: 218103808 data_used: 454656
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 2449408 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:49.660064+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 2449408 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:50.660177+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fac16000/0x0/0x4ffc00000, data 0x96c493/0xa57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89718784 unmapped: 2375680 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:51.660367+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89718784 unmapped: 2375680 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:52.660511+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89718784 unmapped: 2375680 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:53.660776+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106658 data_alloc: 218103808 data_used: 454656
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89726976 unmapped: 2367488 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:54.660937+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89726976 unmapped: 2367488 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:55.661189+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89726976 unmapped: 2367488 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:56.661337+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fac13000/0x0/0x4ffc00000, data 0x96df16/0xa5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.489933014s of 11.828451157s, submitted: 35
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89726976 unmapped: 2367488 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:57.661512+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89726976 unmapped: 2367488 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:58.661696+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107706 data_alloc: 218103808 data_used: 458752
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89735168 unmapped: 2359296 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:59.661838+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 2342912 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:00.662016+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fac13000/0x0/0x4ffc00000, data 0x96dfb1/0xa5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 162 handle_osd_map epochs [163,163], i have 163, src has [1,163]
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 2334720 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:01.662192+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 2334720 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:02.662351+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 2334720 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:03.662503+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110824 data_alloc: 218103808 data_used: 466944
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 2334720 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:04.662663+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fac10000/0x0/0x4ffc00000, data 0x96fbc7/0xa5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 2318336 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:05.662821+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 2318336 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:06.662977+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 2318336 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:07.663632+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 2318336 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:08.663827+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0b000/0x0/0x4ffc00000, data 0x97175b/0xa62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118398 data_alloc: 218103808 data_used: 479232
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 2318336 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:09.664018+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 2318336 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:10.664237+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 2318336 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:11.664524+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 2318336 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0b000/0x0/0x4ffc00000, data 0x97175b/0xa62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:12.664818+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 2318336 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:13.665041+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.872739792s of 16.991382599s, submitted: 73
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117660 data_alloc: 218103808 data_used: 479232
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89784320 unmapped: 2310144 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:14.665275+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89784320 unmapped: 2310144 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:15.665460+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:16.665622+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0d000/0x0/0x4ffc00000, data 0x97164a/0xa61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:17.665800+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0d000/0x0/0x4ffc00000, data 0x97164a/0xa61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:18.665968+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115750 data_alloc: 218103808 data_used: 479232
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:19.666109+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0d000/0x0/0x4ffc00000, data 0x97164a/0xa61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0d000/0x0/0x4ffc00000, data 0x97164a/0xa61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:20.666282+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:21.666464+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:22.666670+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:23.666843+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117518 data_alloc: 218103808 data_used: 479232
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:24.667003+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.731521606s of 10.673498154s, submitted: 3
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0c000/0x0/0x4ffc00000, data 0x97175c/0xa62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:25.667221+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0c000/0x0/0x4ffc00000, data 0x97175c/0xa62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:26.667360+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:27.667539+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:28.667701+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116668 data_alloc: 218103808 data_used: 479232
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:29.667838+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0d000/0x0/0x4ffc00000, data 0x97164a/0xa61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:30.668037+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:31.668202+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89800704 unmapped: 2293760 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:32.668338+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0d000/0x0/0x4ffc00000, data 0x97164a/0xa61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89800704 unmapped: 2293760 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:33.668472+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116684 data_alloc: 218103808 data_used: 479232
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89800704 unmapped: 2293760 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:34.668626+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0d000/0x0/0x4ffc00000, data 0x97164a/0xa61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.121912003s of 10.435431480s, submitted: 5
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:35.668802+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89800704 unmapped: 2293760 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0d000/0x0/0x4ffc00000, data 0x97164a/0xa61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:36.668942+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89800704 unmapped: 2293760 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:37.669072+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90873856 unmapped: 1220608 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:38.669236+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90873856 unmapped: 1220608 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119962 data_alloc: 218103808 data_used: 487424
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:39.669407+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90873856 unmapped: 1220608 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fac0a000/0x0/0x4ffc00000, data 0x973230/0xa64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:40.669597+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90873856 unmapped: 1220608 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fac0a000/0x0/0x4ffc00000, data 0x973230/0xa64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:41.669764+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90873856 unmapped: 1220608 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:42.669939+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90873856 unmapped: 1220608 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:43.670076+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 1212416 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124280 data_alloc: 218103808 data_used: 499712
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:44.670315+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 1212416 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:45.670446+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 1212416 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fac06000/0x0/0x4ffc00000, data 0x974c93/0xa67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:46.670627+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 1212416 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:47.670808+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 1212416 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:48.670967+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 1212416 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124280 data_alloc: 218103808 data_used: 499712
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fac06000/0x0/0x4ffc00000, data 0x974c93/0xa67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:49.671175+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 1212416 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fac06000/0x0/0x4ffc00000, data 0x974c93/0xa67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:50.671382+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 1212416 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:51.671543+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 1212416 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.257966042s of 16.866178513s, submitted: 33
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:52.671719+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 1212416 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:53.671865+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90898432 unmapped: 1196032 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fac06000/0x0/0x4ffc00000, data 0x974c93/0xa67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123416 data_alloc: 218103808 data_used: 499712
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:54.671993+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90898432 unmapped: 1196032 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:55.672135+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 991232 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 166 handle_osd_map epochs [166,167], i have 166, src has [1,167]
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:56.678815+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91144192 unmapped: 950272 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:57.678940+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91365376 unmapped: 1777664 heap: 93143040 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:58.679592+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91365376 unmapped: 1777664 heap: 93143040 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136408 data_alloc: 218103808 data_used: 512000
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:59.679851+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91430912 unmapped: 1712128 heap: 93143040 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fabad000/0x0/0x4ffc00000, data 0x9cbe07/0xac0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:00.679997+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91553792 unmapped: 2637824 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:01.680140+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 2588672 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:02.680253+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 2482176 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.750692844s of 11.498521805s, submitted: 44
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:03.680423+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91234304 unmapped: 2957312 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141242 data_alloc: 218103808 data_used: 520192
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:04.680574+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91234304 unmapped: 2957312 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 168 heartbeat osd_stat(store_statfs(0x4fab76000/0x0/0x4ffc00000, data 0xa00db3/0xaf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:05.680700+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91529216 unmapped: 2662400 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:06.680819+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91586560 unmapped: 2605056 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:07.680941+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 168 heartbeat osd_stat(store_statfs(0x4fab20000/0x0/0x4ffc00000, data 0xa57783/0xb4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 2408448 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 168 heartbeat osd_stat(store_statfs(0x4fab1c000/0x0/0x4ffc00000, data 0xa5ba22/0xb52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:08.681107+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91971584 unmapped: 2220032 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142474 data_alloc: 218103808 data_used: 520192
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:09.681263+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91340800 unmapped: 2850816 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 168 heartbeat osd_stat(store_statfs(0x4fab1d000/0x0/0x4ffc00000, data 0xa5b987/0xb51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:10.681430+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 92479488 unmapped: 1712128 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 168 heartbeat osd_stat(store_statfs(0x4fab0b000/0x0/0x4ffc00000, data 0xa6d8a5/0xb63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:11.692409+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 92643328 unmapped: 1548288 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:12.692527+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 92643328 unmapped: 1548288 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 168 heartbeat osd_stat(store_statfs(0x4faae9000/0x0/0x4ffc00000, data 0xa8f7c7/0xb85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Got map version 16
Oct 01 17:11:35 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.284783363s of 10.012758255s, submitted: 44
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:13.692649+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 1449984 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148846 data_alloc: 218103808 data_used: 520192
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:14.692784+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93052928 unmapped: 1138688 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:15.692958+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1974272 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:16.693100+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2007040 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:17.693220+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 1810432 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:18.693333+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93724672 unmapped: 1515520 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 170 heartbeat osd_stat(store_statfs(0x4faa0e000/0x0/0x4ffc00000, data 0xb67e2d/0xc5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161862 data_alloc: 218103808 data_used: 528384
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:19.693482+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93724672 unmapped: 1515520 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:20.693627+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2236416 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:21.693758+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2236416 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:22.693944+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2236416 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.111546516s of 10.004213333s, submitted: 84
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:23.694291+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2236416 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162712 data_alloc: 218103808 data_used: 536576
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:24.694441+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2236416 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fa9f8000/0x0/0x4ffc00000, data 0xb7c762/0xc75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:25.694627+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93339648 unmapped: 1900544 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fa9cc000/0x0/0x4ffc00000, data 0xba8a5b/0xca1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:26.694804+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93519872 unmapped: 1720320 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:27.694998+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93519872 unmapped: 1720320 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:28.695175+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 1687552 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169384 data_alloc: 218103808 data_used: 544768
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:29.695339+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93634560 unmapped: 1605632 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:30.695506+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 1499136 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fa9a9000/0x0/0x4ffc00000, data 0xbcb911/0xcc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:31.695861+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 1499136 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:32.695984+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93822976 unmapped: 1417216 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:33.696134+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93831168 unmapped: 1409024 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169084 data_alloc: 218103808 data_used: 544768
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:34.696359+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93831168 unmapped: 1409024 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.286650658s of 11.565922737s, submitted: 16
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:35.696567+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93569024 unmapped: 2719744 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:36.696771+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fa971000/0x0/0x4ffc00000, data 0xc0404a/0xcfd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 1376256 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:37.696920+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 1359872 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fa94a000/0x0/0x4ffc00000, data 0xc2a4c0/0xd24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:38.697058+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95207424 unmapped: 1081344 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fa94a000/0x0/0x4ffc00000, data 0xc2a4c0/0xd24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:39.697213+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176732 data_alloc: 218103808 data_used: 544768
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 1359872 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fa94c000/0x0/0x4ffc00000, data 0xc2a38a/0xd22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:40.697341+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 1359872 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:41.697578+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 171 ms_handle_reset con 0x559b47de9000 session 0x559b48c370e0
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95526912 unmapped: 761856 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:42.697704+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95526912 unmapped: 761856 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Got map version 17
Oct 01 17:11:35 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:43.697854+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95543296 unmapped: 745472 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fa90a000/0x0/0x4ffc00000, data 0xc6c291/0xd64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:44.697988+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174258 data_alloc: 218103808 data_used: 544768
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fa90a000/0x0/0x4ffc00000, data 0xc6c291/0xd64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95543296 unmapped: 745472 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.350504875s of 10.000372887s, submitted: 157
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:45.698126+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fa90a000/0x0/0x4ffc00000, data 0xc6c291/0xd64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 802816 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:46.698271+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 802816 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:47.698459+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95674368 unmapped: 614400 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:48.698655+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 172 heartbeat osd_stat(store_statfs(0x4fa8d9000/0x0/0x4ffc00000, data 0xc9a6f9/0xd94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95674368 unmapped: 614400 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:49.698830+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181420 data_alloc: 218103808 data_used: 552960
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95674368 unmapped: 614400 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 172 heartbeat osd_stat(store_statfs(0x4fa8d9000/0x0/0x4ffc00000, data 0xc9a6f9/0xd94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:50.698986+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 172 heartbeat osd_stat(store_statfs(0x4fa8d9000/0x0/0x4ffc00000, data 0xc9a6f9/0xd94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 1368064 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:51.699144+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 1368064 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:52.699302+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 1368064 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:53.699419+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 172 heartbeat osd_stat(store_statfs(0x4fa8c9000/0x0/0x4ffc00000, data 0xcab1f7/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 1294336 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:54.699561+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179540 data_alloc: 218103808 data_used: 552960
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 1294336 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.382527351s of 10.000631332s, submitted: 27
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 172 heartbeat osd_stat(store_statfs(0x4fa8c9000/0x0/0x4ffc00000, data 0xcab1f7/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:55.699711+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:56.699857+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:57.699985+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c3000/0x0/0x4ffc00000, data 0xcb185a/0xdab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:58.700148+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94822400 unmapped: 1466368 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:59.700285+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183666 data_alloc: 218103808 data_used: 561152
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:00.700419+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:01.700586+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:02.700768+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:03.700963+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:04.701115+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183666 data_alloc: 218103808 data_used: 561152
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:05.701252+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:06.701457+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:07.701611+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:08.701812+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:09.701977+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183666 data_alloc: 218103808 data_used: 561152
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:10.702113+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:11.702276+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:12.702407+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:13.702594+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:14.702743+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183666 data_alloc: 218103808 data_used: 561152
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:15.702915+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:16.703072+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:17.703394+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:18.703618+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:19.703764+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183666 data_alloc: 218103808 data_used: 561152
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:20.703931+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:21.704152+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:22.704287+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:23.704482+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:24.704658+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183666 data_alloc: 218103808 data_used: 561152
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:25.704822+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:26.704976+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:27.705073+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:28.705182+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:29.705306+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183666 data_alloc: 218103808 data_used: 561152
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:30.705465+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:31.705634+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:32.705759+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:33.705846+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:34.706001+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183666 data_alloc: 218103808 data_used: 561152
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:35.706101+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:36.706184+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:37.706306+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:38.706474+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:39.706640+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183666 data_alloc: 218103808 data_used: 561152
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:40.706834+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.179630280s of 46.202053070s, submitted: 11
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 ms_handle_reset con 0x559b49c30c00 session 0x559b49106960
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:41.707071+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:42.707248+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Got map version 18
Oct 01 17:11:35 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:43.707423+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:44.707606+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182994 data_alloc: 218103808 data_used: 561152
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb3443/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:45.707739+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:46.707872+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:47.708123+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb3443/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:48.708266+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:49.708386+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182994 data_alloc: 218103808 data_used: 561152
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:50.708493+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:51.708634+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb3443/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:52.708748+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:53.708866+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:54.708986+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182994 data_alloc: 218103808 data_used: 561152
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb3443/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:55.709102+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:56.709208+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:57.709362+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:58.709474+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:59.709629+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:11:35 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:11:35 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182994 data_alloc: 218103808 data_used: 561152
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:00.709800+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb3443/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:01.709990+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95059968 unmapped: 1228800 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: do_command 'config diff' '{prefix=config diff}'
Oct 01 17:11:35 compute-0 ceph-osd[88140]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 01 17:11:35 compute-0 ceph-osd[88140]: do_command 'config show' '{prefix=config show}'
Oct 01 17:11:35 compute-0 ceph-osd[88140]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 01 17:11:35 compute-0 ceph-osd[88140]: do_command 'counter dump' '{prefix=counter dump}'
Oct 01 17:11:35 compute-0 ceph-osd[88140]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 01 17:11:35 compute-0 ceph-osd[88140]: do_command 'counter schema' '{prefix=counter schema}'
Oct 01 17:11:35 compute-0 ceph-osd[88140]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:02.710113+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95158272 unmapped: 2179072 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:03.710272+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 1949696 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:11:35 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:04.710430+0000)
Oct 01 17:11:35 compute-0 ceph-osd[88140]: do_command 'log dump' '{prefix=log dump}'
Oct 01 17:11:35 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14661 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct 01 17:11:35 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/332874118' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 01 17:11:35 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/596620069' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 01 17:11:35 compute-0 ceph-mon[74273]: from='client.14661 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:35 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/332874118' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 01 17:11:35 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 01 17:11:35 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1080666850' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 01 17:11:36 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 01 17:11:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2512120547' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 01 17:11:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1080666850' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 01 17:11:36 compute-0 ceph-mon[74273]: pgmap v1286: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:36 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2512120547' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 01 17:11:36 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 01 17:11:36 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/563952034' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 01 17:11:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:11:37 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14671 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:37 compute-0 systemd[1]: Starting Hostname Service...
Oct 01 17:11:37 compute-0 systemd[1]: Started Hostname Service.
Oct 01 17:11:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 01 17:11:37 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/720196902' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 01 17:11:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/563952034' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 01 17:11:37 compute-0 ceph-mon[74273]: from='client.14671 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:37 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/720196902' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 01 17:11:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 01 17:11:37 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4105071281' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 01 17:11:38 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:38 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14677 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:38 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4105071281' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 01 17:11:38 compute-0 ceph-mon[74273]: pgmap v1287: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:38 compute-0 ceph-mon[74273]: from='client.14677 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:38 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 01 17:11:38 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3902042751' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 01 17:11:38 compute-0 podman[283623]: 2025-10-01 17:11:38.749058739 +0000 UTC m=+0.068762589 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 01 17:11:38 compute-0 podman[283624]: 2025-10-01 17:11:38.772139197 +0000 UTC m=+0.088923239 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 01 17:11:39 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14681 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:39 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14683 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:39 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3902042751' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 01 17:11:39 compute-0 ceph-mon[74273]: from='client.14681 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:39 compute-0 ceph-mon[74273]: from='client.14683 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:39 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 01 17:11:39 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3929068301' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 01 17:11:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct 01 17:11:40 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3500189843' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14689 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:40 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3929068301' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 01 17:11:40 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3500189843' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 01 17:11:40 compute-0 ceph-mon[74273]: pgmap v1288: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:40 compute-0 ceph-mon[74273]: from='client.14689 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14691 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005739061380803542 of space, bias 4.0, pg target 0.6886873656964251 quantized to 16 (current 16)
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:40 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:11:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct 01 17:11:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/557888672' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 01 17:11:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:11:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:11:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:11:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f816c319fa0>)]
Oct 01 17:11:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Oct 01 17:11:41 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct 01 17:11:41 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1966013715' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 01 17:11:41 compute-0 ceph-mon[74273]: from='client.14691 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:41 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/557888672' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 01 17:11:41 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1966013715' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 01 17:11:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:11:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:11:41 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14697 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:11:42 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:42 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14699 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 01 17:11:42 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1102815139' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 01 17:11:43 compute-0 ceph-mon[74273]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.pmbdpj(active, since 37m)
Oct 01 17:11:43 compute-0 ceph-mon[74273]: from='client.14697 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:43 compute-0 ceph-mon[74273]: pgmap v1289: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:11:43 compute-0 ceph-mon[74273]: from='client.14699 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:11:43 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1102815139' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 01 17:11:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Oct 01 17:11:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/860835809' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 01 17:11:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Oct 01 17:11:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2186595341' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 01 17:11:43 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14707 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 17:11:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1753219009' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:11:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 17:11:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1753219009' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:11:44 compute-0 ovs-appctl[284948]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 01 17:11:44 compute-0 ovs-appctl[284953]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 01 17:11:44 compute-0 ovs-appctl[284958]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 01 17:11:44 compute-0 ceph-mon[74273]: mgrmap e19: compute-0.pmbdpj(active, since 37m)
Oct 01 17:11:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/860835809' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 01 17:11:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2186595341' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 01 17:11:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1753219009' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:11:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1753219009' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:11:44 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Oct 01 17:11:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 01 17:11:44 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1666584600' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 01 17:11:44 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Oct 01 17:11:44 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2561764492' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 01 17:11:45 compute-0 ceph-mon[74273]: from='client.14707 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:45 compute-0 ceph-mon[74273]: pgmap v1290: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Oct 01 17:11:45 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1666584600' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 01 17:11:45 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2561764492' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 01 17:11:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Oct 01 17:11:45 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2741784005' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 01 17:11:45 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Oct 01 17:11:45 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3136411886' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 01 17:11:45 compute-0 sudo[285748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:11:45 compute-0 sudo[285748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:11:45 compute-0 sudo[285748]: pam_unix(sudo:session): session closed for user root
Oct 01 17:11:46 compute-0 sudo[285782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:11:46 compute-0 sudo[285782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:11:46 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14721 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:46 compute-0 sudo[285782]: pam_unix(sudo:session): session closed for user root
Oct 01 17:11:46 compute-0 sudo[285813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:11:46 compute-0 sudo[285813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:11:46 compute-0 sudo[285813]: pam_unix(sudo:session): session closed for user root
Oct 01 17:11:46 compute-0 sudo[285847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 17:11:46 compute-0 sudo[285847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:11:46 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Oct 01 17:11:46 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2741784005' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 01 17:11:46 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3136411886' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 01 17:11:46 compute-0 sudo[285847]: pam_unix(sudo:session): session closed for user root
Oct 01 17:11:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:11:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:11:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 17:11:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:11:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Oct 01 17:11:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/919501200' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 01 17:11:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 17:11:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:11:46 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev a8f86c65-3c3c-4d46-b649-54d0dbf9eb7e does not exist
Oct 01 17:11:46 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev c04ad23f-ef5c-47e5-b540-f1297a36f495 does not exist
Oct 01 17:11:46 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev ac7a4f7c-76e6-4dcd-86f5-cf26b2697777 does not exist
Oct 01 17:11:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 17:11:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:11:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 17:11:46 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:11:46 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:11:46 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:11:46 compute-0 sudo[285982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:11:46 compute-0 sudo[285982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:11:46 compute-0 sudo[285982]: pam_unix(sudo:session): session closed for user root
Oct 01 17:11:46 compute-0 sudo[286009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:11:46 compute-0 sudo[286009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:11:46 compute-0 sudo[286009]: pam_unix(sudo:session): session closed for user root
Oct 01 17:11:46 compute-0 sudo[286040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:11:46 compute-0 sudo[286040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:11:46 compute-0 sudo[286040]: pam_unix(sudo:session): session closed for user root
Oct 01 17:11:46 compute-0 sudo[286073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 17:11:46 compute-0 sudo[286073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:11:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Oct 01 17:11:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3043087569' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 01 17:11:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:11:47 compute-0 podman[286164]: 2025-10-01 17:11:47.275006258 +0000 UTC m=+0.021066287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:11:47 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14727 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:47 compute-0 ceph-mon[74273]: from='client.14721 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:47 compute-0 ceph-mon[74273]: pgmap v1291: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Oct 01 17:11:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:11:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:11:47 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/919501200' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 01 17:11:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:11:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:11:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:11:47 compute-0 podman[286164]: 2025-10-01 17:11:47.544642826 +0000 UTC m=+0.290702835 container create fe4365d5476f2494d2d4086b320dfadff316bab4e401ac5679121a155fa6f1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hertz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:11:47 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:11:47 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3043087569' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 01 17:11:47 compute-0 systemd[1]: Started libpod-conmon-fe4365d5476f2494d2d4086b320dfadff316bab4e401ac5679121a155fa6f1bb.scope.
Oct 01 17:11:47 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:11:47 compute-0 podman[286164]: 2025-10-01 17:11:47.82487128 +0000 UTC m=+0.570931339 container init fe4365d5476f2494d2d4086b320dfadff316bab4e401ac5679121a155fa6f1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hertz, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 17:11:47 compute-0 podman[286164]: 2025-10-01 17:11:47.832829861 +0000 UTC m=+0.578889870 container start fe4365d5476f2494d2d4086b320dfadff316bab4e401ac5679121a155fa6f1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 17:11:47 compute-0 priceless_hertz[286213]: 167 167
Oct 01 17:11:47 compute-0 systemd[1]: libpod-fe4365d5476f2494d2d4086b320dfadff316bab4e401ac5679121a155fa6f1bb.scope: Deactivated successfully.
Oct 01 17:11:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Oct 01 17:11:47 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1458764199' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 01 17:11:47 compute-0 podman[286164]: 2025-10-01 17:11:47.885350612 +0000 UTC m=+0.631410641 container attach fe4365d5476f2494d2d4086b320dfadff316bab4e401ac5679121a155fa6f1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:11:47 compute-0 podman[286164]: 2025-10-01 17:11:47.885944057 +0000 UTC m=+0.632004076 container died fe4365d5476f2494d2d4086b320dfadff316bab4e401ac5679121a155fa6f1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 17:11:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-fed540c4ff8641e9a22596904d36c658525b43d3600f2e4f37dba986347fb5ec-merged.mount: Deactivated successfully.
Oct 01 17:11:48 compute-0 podman[286164]: 2025-10-01 17:11:48.063903574 +0000 UTC m=+0.809963583 container remove fe4365d5476f2494d2d4086b320dfadff316bab4e401ac5679121a155fa6f1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hertz, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:11:48 compute-0 systemd[1]: libpod-conmon-fe4365d5476f2494d2d4086b320dfadff316bab4e401ac5679121a155fa6f1bb.scope: Deactivated successfully.
Oct 01 17:11:48 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14731 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:48 compute-0 podman[286278]: 2025-10-01 17:11:48.224165793 +0000 UTC m=+0.044175893 container create a0d15e8320e79d653ea23563ba5b2133d80bce1589b5af4b660d7d0e2b25ec6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haslett, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:11:48 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Oct 01 17:11:48 compute-0 systemd[1]: Started libpod-conmon-a0d15e8320e79d653ea23563ba5b2133d80bce1589b5af4b660d7d0e2b25ec6c.scope.
Oct 01 17:11:48 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:11:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4e7919bf8a6a6f517a20756f9e49601c0f24ce9682505c2413449d76324af9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:11:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4e7919bf8a6a6f517a20756f9e49601c0f24ce9682505c2413449d76324af9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:11:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4e7919bf8a6a6f517a20756f9e49601c0f24ce9682505c2413449d76324af9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:11:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4e7919bf8a6a6f517a20756f9e49601c0f24ce9682505c2413449d76324af9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:11:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4e7919bf8a6a6f517a20756f9e49601c0f24ce9682505c2413449d76324af9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 17:11:48 compute-0 podman[286278]: 2025-10-01 17:11:48.200020169 +0000 UTC m=+0.020030279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:11:48 compute-0 podman[286278]: 2025-10-01 17:11:48.299840719 +0000 UTC m=+0.119850819 container init a0d15e8320e79d653ea23563ba5b2133d80bce1589b5af4b660d7d0e2b25ec6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 17:11:48 compute-0 podman[286278]: 2025-10-01 17:11:48.310755972 +0000 UTC m=+0.130766072 container start a0d15e8320e79d653ea23563ba5b2133d80bce1589b5af4b660d7d0e2b25ec6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:11:48 compute-0 podman[286278]: 2025-10-01 17:11:48.314375458 +0000 UTC m=+0.134385558 container attach a0d15e8320e79d653ea23563ba5b2133d80bce1589b5af4b660d7d0e2b25ec6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 17:11:48 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14733 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:48 compute-0 ceph-mon[74273]: from='client.14727 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:48 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1458764199' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 01 17:11:48 compute-0 ceph-mon[74273]: from='client.14731 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:48 compute-0 ceph-mon[74273]: pgmap v1292: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Oct 01 17:11:48 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Oct 01 17:11:48 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2118616315' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 01 17:11:49 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Oct 01 17:11:49 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1858246917' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 01 17:11:49 compute-0 youthful_haslett[286299]: --> passed data devices: 0 physical, 3 LVM
Oct 01 17:11:49 compute-0 youthful_haslett[286299]: --> relative data size: 1.0
Oct 01 17:11:49 compute-0 youthful_haslett[286299]: --> All data devices are unavailable
Oct 01 17:11:49 compute-0 systemd[1]: libpod-a0d15e8320e79d653ea23563ba5b2133d80bce1589b5af4b660d7d0e2b25ec6c.scope: Deactivated successfully.
Oct 01 17:11:49 compute-0 podman[286278]: 2025-10-01 17:11:49.31095116 +0000 UTC m=+1.130961260 container died a0d15e8320e79d653ea23563ba5b2133d80bce1589b5af4b660d7d0e2b25ec6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14739 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:49 compute-0 ceph-mon[74273]: from='client.14733 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:49 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2118616315' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 01 17:11:49 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1858246917' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 01 17:11:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b4e7919bf8a6a6f517a20756f9e49601c0f24ce9682505c2413449d76324af9-merged.mount: Deactivated successfully.
Oct 01 17:11:49 compute-0 podman[286278]: 2025-10-01 17:11:49.680480745 +0000 UTC m=+1.500490855 container remove a0d15e8320e79d653ea23563ba5b2133d80bce1589b5af4b660d7d0e2b25ec6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haslett, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:11:49 compute-0 sudo[286073]: pam_unix(sudo:session): session closed for user root
Oct 01 17:11:49 compute-0 sudo[286505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:11:49 compute-0 sudo[286505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:11:49 compute-0 sudo[286505]: pam_unix(sudo:session): session closed for user root
Oct 01 17:11:49 compute-0 systemd[1]: libpod-conmon-a0d15e8320e79d653ea23563ba5b2133d80bce1589b5af4b660d7d0e2b25ec6c.scope: Deactivated successfully.
Oct 01 17:11:49 compute-0 sudo[286530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:11:49 compute-0 sudo[286530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:11:49 compute-0 sudo[286530]: pam_unix(sudo:session): session closed for user root
Oct 01 17:11:49 compute-0 sudo[286555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:11:49 compute-0 sudo[286555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:11:49 compute-0 sudo[286555]: pam_unix(sudo:session): session closed for user root
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14741 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005739061380803542 of space, bias 4.0, pg target 0.6886873656964251 quantized to 16 (current 16)
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:11:49 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:11:49 compute-0 sudo[286580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 17:11:49 compute-0 sudo[286580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:11:50 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Oct 01 17:11:50 compute-0 podman[286667]: 2025-10-01 17:11:50.285517417 +0000 UTC m=+0.047025513 container create ec37be4cbba9011c76eaef23db27ac3562764434207ea9b23947ae25cfd34da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:11:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 01 17:11:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2190858366' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 01 17:11:50 compute-0 systemd[1]: Started libpod-conmon-ec37be4cbba9011c76eaef23db27ac3562764434207ea9b23947ae25cfd34da4.scope.
Oct 01 17:11:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:11:50 compute-0 podman[286667]: 2025-10-01 17:11:50.260026973 +0000 UTC m=+0.021535099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:11:50 compute-0 podman[286667]: 2025-10-01 17:11:50.364575744 +0000 UTC m=+0.126083860 container init ec37be4cbba9011c76eaef23db27ac3562764434207ea9b23947ae25cfd34da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 17:11:50 compute-0 podman[286667]: 2025-10-01 17:11:50.370970683 +0000 UTC m=+0.132478779 container start ec37be4cbba9011c76eaef23db27ac3562764434207ea9b23947ae25cfd34da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:11:50 compute-0 stupefied_lovelace[286685]: 167 167
Oct 01 17:11:50 compute-0 systemd[1]: libpod-ec37be4cbba9011c76eaef23db27ac3562764434207ea9b23947ae25cfd34da4.scope: Deactivated successfully.
Oct 01 17:11:50 compute-0 podman[286667]: 2025-10-01 17:11:50.376524046 +0000 UTC m=+0.138032162 container attach ec37be4cbba9011c76eaef23db27ac3562764434207ea9b23947ae25cfd34da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:11:50 compute-0 podman[286667]: 2025-10-01 17:11:50.376925147 +0000 UTC m=+0.138433243 container died ec37be4cbba9011c76eaef23db27ac3562764434207ea9b23947ae25cfd34da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:11:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-1884dc3aa3b0ea2b7c224a7706c08c80cbe2b1ec420c52f93a5de09fe4cecaf9-merged.mount: Deactivated successfully.
Oct 01 17:11:50 compute-0 podman[286667]: 2025-10-01 17:11:50.414802528 +0000 UTC m=+0.176310624 container remove ec37be4cbba9011c76eaef23db27ac3562764434207ea9b23947ae25cfd34da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 17:11:50 compute-0 podman[286681]: 2025-10-01 17:11:50.41775716 +0000 UTC m=+0.092731122 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 17:11:50 compute-0 systemd[1]: libpod-conmon-ec37be4cbba9011c76eaef23db27ac3562764434207ea9b23947ae25cfd34da4.scope: Deactivated successfully.
Oct 01 17:11:50 compute-0 podman[286752]: 2025-10-01 17:11:50.610027916 +0000 UTC m=+0.057956708 container create 0fe9f125eb816e45df88ad8ded0be99f7c748d2826ef739bb9ce22c5e1254f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_carver, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 17:11:50 compute-0 ceph-mon[74273]: from='client.14739 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:50 compute-0 ceph-mon[74273]: from='client.14741 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:50 compute-0 ceph-mon[74273]: pgmap v1293: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Oct 01 17:11:50 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2190858366' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 01 17:11:50 compute-0 systemd[1]: Started libpod-conmon-0fe9f125eb816e45df88ad8ded0be99f7c748d2826ef739bb9ce22c5e1254f9a.scope.
Oct 01 17:11:50 compute-0 podman[286752]: 2025-10-01 17:11:50.57875583 +0000 UTC m=+0.026684652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:11:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33adf90e0830bfffb93491142873220eedfa04b8dd6567468769c99a8d9f2668/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33adf90e0830bfffb93491142873220eedfa04b8dd6567468769c99a8d9f2668/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33adf90e0830bfffb93491142873220eedfa04b8dd6567468769c99a8d9f2668/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33adf90e0830bfffb93491142873220eedfa04b8dd6567468769c99a8d9f2668/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:11:50 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Oct 01 17:11:50 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1509798564' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 01 17:11:50 compute-0 podman[286752]: 2025-10-01 17:11:50.705002286 +0000 UTC m=+0.152931098 container init 0fe9f125eb816e45df88ad8ded0be99f7c748d2826ef739bb9ce22c5e1254f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_carver, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:11:50 compute-0 podman[286752]: 2025-10-01 17:11:50.71102584 +0000 UTC m=+0.158954632 container start 0fe9f125eb816e45df88ad8ded0be99f7c748d2826ef739bb9ce22c5e1254f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_carver, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:11:50 compute-0 podman[286752]: 2025-10-01 17:11:50.715369184 +0000 UTC m=+0.163298006 container attach 0fe9f125eb816e45df88ad8ded0be99f7c748d2826ef739bb9ce22c5e1254f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:11:51 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14747 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:51 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14749 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:51 compute-0 frosty_carver[286769]: {
Oct 01 17:11:51 compute-0 frosty_carver[286769]:     "0": [
Oct 01 17:11:51 compute-0 frosty_carver[286769]:         {
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "devices": [
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "/dev/loop3"
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             ],
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "lv_name": "ceph_lv0",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "lv_size": "21470642176",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "name": "ceph_lv0",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "tags": {
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.cluster_name": "ceph",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.crush_device_class": "",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.encrypted": "0",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.osd_id": "0",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.type": "block",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.vdo": "0"
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             },
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "type": "block",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "vg_name": "ceph_vg0"
Oct 01 17:11:51 compute-0 frosty_carver[286769]:         }
Oct 01 17:11:51 compute-0 frosty_carver[286769]:     ],
Oct 01 17:11:51 compute-0 frosty_carver[286769]:     "1": [
Oct 01 17:11:51 compute-0 frosty_carver[286769]:         {
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "devices": [
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "/dev/loop4"
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             ],
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "lv_name": "ceph_lv1",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "lv_size": "21470642176",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "name": "ceph_lv1",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "tags": {
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.cluster_name": "ceph",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.crush_device_class": "",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.encrypted": "0",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.osd_id": "1",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.type": "block",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.vdo": "0"
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             },
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "type": "block",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "vg_name": "ceph_vg1"
Oct 01 17:11:51 compute-0 frosty_carver[286769]:         }
Oct 01 17:11:51 compute-0 frosty_carver[286769]:     ],
Oct 01 17:11:51 compute-0 frosty_carver[286769]:     "2": [
Oct 01 17:11:51 compute-0 frosty_carver[286769]:         {
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "devices": [
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "/dev/loop5"
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             ],
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "lv_name": "ceph_lv2",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "lv_size": "21470642176",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "name": "ceph_lv2",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "tags": {
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.cluster_name": "ceph",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.crush_device_class": "",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.encrypted": "0",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.osd_id": "2",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.type": "block",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:                 "ceph.vdo": "0"
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             },
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "type": "block",
Oct 01 17:11:51 compute-0 frosty_carver[286769]:             "vg_name": "ceph_vg2"
Oct 01 17:11:51 compute-0 frosty_carver[286769]:         }
Oct 01 17:11:51 compute-0 frosty_carver[286769]:     ]
Oct 01 17:11:51 compute-0 frosty_carver[286769]: }
Oct 01 17:11:51 compute-0 systemd[1]: libpod-0fe9f125eb816e45df88ad8ded0be99f7c748d2826ef739bb9ce22c5e1254f9a.scope: Deactivated successfully.
Oct 01 17:11:51 compute-0 podman[286752]: 2025-10-01 17:11:51.545432923 +0000 UTC m=+0.993361725 container died 0fe9f125eb816e45df88ad8ded0be99f7c748d2826ef739bb9ce22c5e1254f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_carver, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:11:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 01 17:11:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1983276072' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 17:11:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:11:52 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1509798564' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 01 17:11:52 compute-0 ceph-mon[74273]: from='client.14747 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:52 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Oct 01 17:11:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-33adf90e0830bfffb93491142873220eedfa04b8dd6567468769c99a8d9f2668-merged.mount: Deactivated successfully.
Oct 01 17:11:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Oct 01 17:11:52 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1528884067' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 01 17:11:52 compute-0 podman[286752]: 2025-10-01 17:11:52.606996017 +0000 UTC m=+2.054924809 container remove 0fe9f125eb816e45df88ad8ded0be99f7c748d2826ef739bb9ce22c5e1254f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 17:11:52 compute-0 systemd[1]: libpod-conmon-0fe9f125eb816e45df88ad8ded0be99f7c748d2826ef739bb9ce22c5e1254f9a.scope: Deactivated successfully.
Oct 01 17:11:52 compute-0 sudo[286580]: pam_unix(sudo:session): session closed for user root
Oct 01 17:11:52 compute-0 sudo[286927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:11:52 compute-0 sudo[286927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:11:52 compute-0 sudo[286927]: pam_unix(sudo:session): session closed for user root
Oct 01 17:11:52 compute-0 sudo[286956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:11:52 compute-0 sudo[286956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:11:52 compute-0 sudo[286956]: pam_unix(sudo:session): session closed for user root
Oct 01 17:11:52 compute-0 sudo[286983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:11:52 compute-0 sudo[286983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:11:52 compute-0 sudo[286983]: pam_unix(sudo:session): session closed for user root
Oct 01 17:11:52 compute-0 sudo[287013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 17:11:52 compute-0 sudo[287013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:11:53 compute-0 ceph-mon[74273]: from='client.14749 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:11:53 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1983276072' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 17:11:53 compute-0 ceph-mon[74273]: pgmap v1294: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Oct 01 17:11:53 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1528884067' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 01 17:11:53 compute-0 podman[287157]: 2025-10-01 17:11:53.216410117 +0000 UTC m=+0.020183531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:11:53 compute-0 podman[287157]: 2025-10-01 17:11:53.412026881 +0000 UTC m=+0.215800265 container create 095a1d5c7584dff4587d020e5f3032d225f779e321bccd6de49c1af2113bf2fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_swanson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:11:53 compute-0 virtqemud[259310]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 01 17:11:53 compute-0 systemd[1]: Started libpod-conmon-095a1d5c7584dff4587d020e5f3032d225f779e321bccd6de49c1af2113bf2fc.scope.
Oct 01 17:11:53 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:11:54 compute-0 podman[287157]: 2025-10-01 17:11:54.027591298 +0000 UTC m=+0.831364692 container init 095a1d5c7584dff4587d020e5f3032d225f779e321bccd6de49c1af2113bf2fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:11:54 compute-0 podman[287157]: 2025-10-01 17:11:54.037529206 +0000 UTC m=+0.841302610 container start 095a1d5c7584dff4587d020e5f3032d225f779e321bccd6de49c1af2113bf2fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 17:11:54 compute-0 ecstatic_swanson[287352]: 167 167
Oct 01 17:11:54 compute-0 systemd[1]: libpod-095a1d5c7584dff4587d020e5f3032d225f779e321bccd6de49c1af2113bf2fc.scope: Deactivated successfully.
Oct 01 17:11:54 compute-0 podman[287157]: 2025-10-01 17:11:54.076283516 +0000 UTC m=+0.880056900 container attach 095a1d5c7584dff4587d020e5f3032d225f779e321bccd6de49c1af2113bf2fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:11:54 compute-0 podman[287157]: 2025-10-01 17:11:54.076696671 +0000 UTC m=+0.880470065 container died 095a1d5c7584dff4587d020e5f3032d225f779e321bccd6de49c1af2113bf2fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_swanson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 17:11:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-11cae3e68c325c2c4a385292553a57d7989bdd9a727d99cd49bea432b8f8ed2a-merged.mount: Deactivated successfully.
Oct 01 17:11:54 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Oct 01 17:11:54 compute-0 podman[287157]: 2025-10-01 17:11:54.527526094 +0000 UTC m=+1.331299498 container remove 095a1d5c7584dff4587d020e5f3032d225f779e321bccd6de49c1af2113bf2fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_swanson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 17:11:54 compute-0 ceph-mon[74273]: pgmap v1295: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Oct 01 17:11:54 compute-0 systemd[1]: libpod-conmon-095a1d5c7584dff4587d020e5f3032d225f779e321bccd6de49c1af2113bf2fc.scope: Deactivated successfully.
Oct 01 17:11:54 compute-0 podman[287447]: 2025-10-01 17:11:54.768168381 +0000 UTC m=+0.092911701 container create 4d0ea7860ff4aafa61eb977adfcd1c4ee1e0607137f3ec52fc208e3ddb1ccb9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:11:54 compute-0 podman[287447]: 2025-10-01 17:11:54.699503939 +0000 UTC m=+0.024247289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:11:54 compute-0 systemd[1]: Started libpod-conmon-4d0ea7860ff4aafa61eb977adfcd1c4ee1e0607137f3ec52fc208e3ddb1ccb9a.scope.
Oct 01 17:11:54 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eefbd6134dbd71aecde9d61310633dd7c9f6889da96f0b39dbf3632a0af7a0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eefbd6134dbd71aecde9d61310633dd7c9f6889da96f0b39dbf3632a0af7a0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eefbd6134dbd71aecde9d61310633dd7c9f6889da96f0b39dbf3632a0af7a0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eefbd6134dbd71aecde9d61310633dd7c9f6889da96f0b39dbf3632a0af7a0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:11:55 compute-0 podman[287447]: 2025-10-01 17:11:55.18523029 +0000 UTC m=+0.509973640 container init 4d0ea7860ff4aafa61eb977adfcd1c4ee1e0607137f3ec52fc208e3ddb1ccb9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:11:55 compute-0 podman[287447]: 2025-10-01 17:11:55.193273124 +0000 UTC m=+0.518016444 container start 4d0ea7860ff4aafa61eb977adfcd1c4ee1e0607137f3ec52fc208e3ddb1ccb9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:11:55 compute-0 podman[287447]: 2025-10-01 17:11:55.273794009 +0000 UTC m=+0.598537329 container attach 4d0ea7860ff4aafa61eb977adfcd1c4ee1e0607137f3ec52fc208e3ddb1ccb9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:11:55 compute-0 podman[287556]: 2025-10-01 17:11:55.829410517 +0000 UTC m=+0.053119315 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]: {
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:         "osd_id": 2,
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:         "type": "bluestore"
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:     },
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:         "osd_id": 0,
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:         "type": "bluestore"
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:     },
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:         "osd_id": 1,
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:         "type": "bluestore"
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]:     }
Oct 01 17:11:56 compute-0 musing_varahamihira[287511]: }
Oct 01 17:11:56 compute-0 systemd[1]: libpod-4d0ea7860ff4aafa61eb977adfcd1c4ee1e0607137f3ec52fc208e3ddb1ccb9a.scope: Deactivated successfully.
Oct 01 17:11:56 compute-0 podman[287447]: 2025-10-01 17:11:56.150427354 +0000 UTC m=+1.475170684 container died 4d0ea7860ff4aafa61eb977adfcd1c4ee1e0607137f3ec52fc208e3ddb1ccb9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:11:56 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Oct 01 17:11:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-1eefbd6134dbd71aecde9d61310633dd7c9f6889da96f0b39dbf3632a0af7a0a-merged.mount: Deactivated successfully.
Oct 01 17:11:56 compute-0 ceph-mon[74273]: pgmap v1296: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Oct 01 17:11:56 compute-0 podman[287447]: 2025-10-01 17:11:56.564323176 +0000 UTC m=+1.889066496 container remove 4d0ea7860ff4aafa61eb977adfcd1c4ee1e0607137f3ec52fc208e3ddb1ccb9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 17:11:56 compute-0 sudo[287013]: pam_unix(sudo:session): session closed for user root
Oct 01 17:11:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:11:56 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:11:56 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:11:56 compute-0 systemd[1]: libpod-conmon-4d0ea7860ff4aafa61eb977adfcd1c4ee1e0607137f3ec52fc208e3ddb1ccb9a.scope: Deactivated successfully.
Oct 01 17:11:56 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:11:56 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 799ab9ef-9c31-4250-b4cc-13622cf02040 does not exist
Oct 01 17:11:56 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev d651a16e-767e-4639-b3aa-bd5e6a1c385a does not exist
Oct 01 17:11:56 compute-0 sudo[287620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:11:56 compute-0 sudo[287620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:11:56 compute-0 sudo[287620]: pam_unix(sudo:session): session closed for user root
Oct 01 17:11:56 compute-0 sudo[287647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 17:11:56 compute-0 sudo[287647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:11:56 compute-0 sudo[287647]: pam_unix(sudo:session): session closed for user root
Oct 01 17:11:56 compute-0 systemd[1]: Starting Time & Date Service...
Oct 01 17:11:56 compute-0 systemd[1]: Started Time & Date Service.
Oct 01 17:11:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:11:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:11:57 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:11:58 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Oct 01 17:11:58 compute-0 ceph-mon[74273]: pgmap v1297: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Oct 01 17:12:00 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:01 compute-0 ceph-mon[74273]: pgmap v1298: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:12:02 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:02 compute-0 ceph-mon[74273]: pgmap v1299: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:04 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:05 compute-0 ceph-mon[74273]: pgmap v1300: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:06 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:12:07 compute-0 ceph-mon[74273]: pgmap v1301: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:08 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:08 compute-0 ceph-mon[74273]: pgmap v1302: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:09 compute-0 podman[287683]: 2025-10-01 17:12:09.721703981 +0000 UTC m=+0.056045413 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 01 17:12:09 compute-0 podman[287682]: 2025-10-01 17:12:09.72768013 +0000 UTC m=+0.061876120 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 17:12:10 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:11 compute-0 ceph-mon[74273]: pgmap v1303: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:12:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:12:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_17:12:11
Oct 01 17:12:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 17:12:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 17:12:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'images', 'volumes', 'default.rgw.meta', 'vms', 'default.rgw.control', '.rgw.root', '.mgr']
Oct 01 17:12:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 17:12:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:12:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:12:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:12:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:12:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:12:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 17:12:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:12:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 17:12:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:12:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:12:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:12:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:12:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:12:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:12:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:12:12 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:13 compute-0 ceph-mon[74273]: pgmap v1304: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:14 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:14 compute-0 ceph-mon[74273]: pgmap v1305: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:16 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:16 compute-0 ceph-mon[74273]: pgmap v1306: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:12:18 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:18 compute-0 ceph-mon[74273]: pgmap v1307: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:19 compute-0 nova_compute[259504]: 2025-10-01 17:12:19.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:12:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:12:19.982 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:12:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:12:19.982 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:12:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:12:19.983 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:12:20 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:20 compute-0 podman[287721]: 2025-10-01 17:12:20.794043234 +0000 UTC m=+0.115827584 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 01 17:12:20 compute-0 ceph-mon[74273]: pgmap v1308: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005739061380803542 of space, bias 4.0, pg target 0.6886873656964251 quantized to 16 (current 16)
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:12:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:12:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:12:22 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:23 compute-0 sudo[280150]: pam_unix(sudo:session): session closed for user root
Oct 01 17:12:23 compute-0 sshd-session[280149]: Received disconnect from 192.168.122.10 port 58042:11: disconnected by user
Oct 01 17:12:23 compute-0 sshd-session[280149]: Disconnected from user zuul 192.168.122.10 port 58042
Oct 01 17:12:23 compute-0 sshd-session[280146]: pam_unix(sshd:session): session closed for user zuul
Oct 01 17:12:23 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Oct 01 17:12:23 compute-0 systemd[1]: session-53.scope: Consumed 2min 27.764s CPU time, 720.8M memory peak, read 235.1M from disk, written 272.5M to disk.
Oct 01 17:12:23 compute-0 systemd-logind[788]: Session 53 logged out. Waiting for processes to exit.
Oct 01 17:12:23 compute-0 systemd-logind[788]: Removed session 53.
Oct 01 17:12:23 compute-0 sshd-session[287748]: Accepted publickey for zuul from 192.168.122.10 port 45470 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 17:12:23 compute-0 systemd-logind[788]: New session 54 of user zuul.
Oct 01 17:12:23 compute-0 systemd[1]: Started Session 54 of User zuul.
Oct 01 17:12:23 compute-0 sshd-session[287748]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 17:12:23 compute-0 sudo[287752]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-10-01-ltamhsl.tar.xz
Oct 01 17:12:23 compute-0 sudo[287752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 17:12:23 compute-0 sudo[287752]: pam_unix(sudo:session): session closed for user root
Oct 01 17:12:23 compute-0 sshd-session[287751]: Received disconnect from 192.168.122.10 port 45470:11: disconnected by user
Oct 01 17:12:23 compute-0 sshd-session[287751]: Disconnected from user zuul 192.168.122.10 port 45470
Oct 01 17:12:23 compute-0 sshd-session[287748]: pam_unix(sshd:session): session closed for user zuul
Oct 01 17:12:23 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Oct 01 17:12:23 compute-0 systemd-logind[788]: Session 54 logged out. Waiting for processes to exit.
Oct 01 17:12:23 compute-0 systemd-logind[788]: Removed session 54.
Oct 01 17:12:23 compute-0 sshd-session[287777]: Accepted publickey for zuul from 192.168.122.10 port 45478 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 17:12:23 compute-0 systemd-logind[788]: New session 55 of user zuul.
Oct 01 17:12:23 compute-0 systemd[1]: Started Session 55 of User zuul.
Oct 01 17:12:23 compute-0 sshd-session[287777]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 17:12:23 compute-0 ceph-mon[74273]: pgmap v1309: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:23 compute-0 sudo[287781]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Oct 01 17:12:23 compute-0 sudo[287781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 17:12:23 compute-0 sudo[287781]: pam_unix(sudo:session): session closed for user root
Oct 01 17:12:23 compute-0 sshd-session[287780]: Received disconnect from 192.168.122.10 port 45478:11: disconnected by user
Oct 01 17:12:23 compute-0 sshd-session[287780]: Disconnected from user zuul 192.168.122.10 port 45478
Oct 01 17:12:23 compute-0 sshd-session[287777]: pam_unix(sshd:session): session closed for user zuul
Oct 01 17:12:23 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Oct 01 17:12:23 compute-0 systemd-logind[788]: Session 55 logged out. Waiting for processes to exit.
Oct 01 17:12:23 compute-0 systemd-logind[788]: Removed session 55.
Oct 01 17:12:24 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:24 compute-0 ceph-mon[74273]: pgmap v1310: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:25 compute-0 nova_compute[259504]: 2025-10-01 17:12:25.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:12:25 compute-0 nova_compute[259504]: 2025-10-01 17:12:25.752 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 17:12:25 compute-0 nova_compute[259504]: 2025-10-01 17:12:25.752 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 17:12:25 compute-0 nova_compute[259504]: 2025-10-01 17:12:25.829 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 17:12:26 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:26 compute-0 ceph-mon[74273]: pgmap v1311: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:26 compute-0 nova_compute[259504]: 2025-10-01 17:12:26.749 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:12:26 compute-0 podman[287806]: 2025-10-01 17:12:26.773203085 +0000 UTC m=+0.069485969 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 01 17:12:26 compute-0 nova_compute[259504]: 2025-10-01 17:12:26.834 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:12:26 compute-0 nova_compute[259504]: 2025-10-01 17:12:26.835 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:12:26 compute-0 nova_compute[259504]: 2025-10-01 17:12:26.835 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:12:26 compute-0 nova_compute[259504]: 2025-10-01 17:12:26.835 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 17:12:26 compute-0 nova_compute[259504]: 2025-10-01 17:12:26.836 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:12:27 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 01 17:12:27 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 01 17:12:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:12:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:12:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3597881272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:12:27 compute-0 nova_compute[259504]: 2025-10-01 17:12:27.270 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:12:27 compute-0 nova_compute[259504]: 2025-10-01 17:12:27.474 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 17:12:27 compute-0 nova_compute[259504]: 2025-10-01 17:12:27.476 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4882MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 17:12:27 compute-0 nova_compute[259504]: 2025-10-01 17:12:27.476 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:12:27 compute-0 nova_compute[259504]: 2025-10-01 17:12:27.477 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:12:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3597881272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:12:27 compute-0 nova_compute[259504]: 2025-10-01 17:12:27.806 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 17:12:27 compute-0 nova_compute[259504]: 2025-10-01 17:12:27.806 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 17:12:27 compute-0 nova_compute[259504]: 2025-10-01 17:12:27.830 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:12:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:12:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4116247724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:12:28 compute-0 nova_compute[259504]: 2025-10-01 17:12:28.236 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:12:28 compute-0 nova_compute[259504]: 2025-10-01 17:12:28.240 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 17:12:28 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:28 compute-0 nova_compute[259504]: 2025-10-01 17:12:28.391 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 17:12:28 compute-0 nova_compute[259504]: 2025-10-01 17:12:28.394 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 17:12:28 compute-0 nova_compute[259504]: 2025-10-01 17:12:28.394 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:12:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4116247724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:12:28 compute-0 ceph-mon[74273]: pgmap v1312: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:29 compute-0 nova_compute[259504]: 2025-10-01 17:12:29.397 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:12:29 compute-0 nova_compute[259504]: 2025-10-01 17:12:29.398 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:12:29 compute-0 nova_compute[259504]: 2025-10-01 17:12:29.398 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:12:29 compute-0 nova_compute[259504]: 2025-10-01 17:12:29.398 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:12:30 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:30 compute-0 nova_compute[259504]: 2025-10-01 17:12:30.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:12:30 compute-0 nova_compute[259504]: 2025-10-01 17:12:30.750 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 17:12:31 compute-0 ceph-mon[74273]: pgmap v1313: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:12:32 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:32 compute-0 ceph-mon[74273]: pgmap v1314: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:32 compute-0 nova_compute[259504]: 2025-10-01 17:12:32.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:12:34 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:34 compute-0 ceph-mon[74273]: pgmap v1315: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:36 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:12:37 compute-0 ceph-mon[74273]: pgmap v1316: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:37 compute-0 nova_compute[259504]: 2025-10-01 17:12:37.746 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:12:38 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:38 compute-0 ceph-mon[74273]: pgmap v1317: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:40 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:40 compute-0 podman[287874]: 2025-10-01 17:12:40.731562981 +0000 UTC m=+0.052183704 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0)
Oct 01 17:12:40 compute-0 podman[287873]: 2025-10-01 17:12:40.73594816 +0000 UTC m=+0.058611350 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 01 17:12:41 compute-0 ceph-mon[74273]: pgmap v1318: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:12:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:12:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:12:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:12:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:12:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:12:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:12:42 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:43 compute-0 ceph-mon[74273]: pgmap v1319: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 17:12:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3764994281' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:12:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 17:12:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3764994281' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:12:44 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3764994281' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:12:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3764994281' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:12:45 compute-0 ceph-mon[74273]: pgmap v1320: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:46 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:46 compute-0 ceph-mon[74273]: pgmap v1321: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:12:48 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:49 compute-0 ceph-mon[74273]: pgmap v1322: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:50 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:51 compute-0 ceph-mon[74273]: pgmap v1323: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:51 compute-0 podman[287912]: 2025-10-01 17:12:51.743876342 +0000 UTC m=+0.066590434 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 01 17:12:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:12:52 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:52 compute-0 ceph-mon[74273]: pgmap v1324: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:54 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:55 compute-0 ceph-mon[74273]: pgmap v1325: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:56 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:56 compute-0 sudo[287938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:12:56 compute-0 sudo[287938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:12:56 compute-0 sudo[287938]: pam_unix(sudo:session): session closed for user root
Oct 01 17:12:57 compute-0 podman[287962]: 2025-10-01 17:12:57.012498342 +0000 UTC m=+0.053438098 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3)
Oct 01 17:12:57 compute-0 sudo[287969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:12:57 compute-0 sudo[287969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:12:57 compute-0 sudo[287969]: pam_unix(sudo:session): session closed for user root
Oct 01 17:12:57 compute-0 sudo[288009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:12:57 compute-0 sudo[288009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:12:57 compute-0 sudo[288009]: pam_unix(sudo:session): session closed for user root
Oct 01 17:12:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:12:57 compute-0 sudo[288034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 17:12:57 compute-0 sudo[288034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:12:57 compute-0 ceph-mon[74273]: pgmap v1326: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:57 compute-0 sudo[288034]: pam_unix(sudo:session): session closed for user root
Oct 01 17:12:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:12:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:12:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 17:12:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:12:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 17:12:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:12:57 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 91439261-2550-4aad-b05a-0b7d93550768 does not exist
Oct 01 17:12:57 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev bac75c11-7ee6-4e0a-9a37-37276d334cb4 does not exist
Oct 01 17:12:57 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 13e4a8fc-4436-4181-abbc-768b7036bd3c does not exist
Oct 01 17:12:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 17:12:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:12:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 17:12:57 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:12:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:12:57 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:12:57 compute-0 sudo[288091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:12:57 compute-0 sudo[288091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:12:57 compute-0 sudo[288091]: pam_unix(sudo:session): session closed for user root
Oct 01 17:12:57 compute-0 sudo[288116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:12:57 compute-0 sudo[288116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:12:57 compute-0 sudo[288116]: pam_unix(sudo:session): session closed for user root
Oct 01 17:12:57 compute-0 sudo[288141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:12:57 compute-0 sudo[288141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:12:57 compute-0 sudo[288141]: pam_unix(sudo:session): session closed for user root
Oct 01 17:12:57 compute-0 sudo[288166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 17:12:57 compute-0 sudo[288166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:12:58 compute-0 podman[288232]: 2025-10-01 17:12:58.1302668 +0000 UTC m=+0.042381009 container create 365276813f5297b98996921b83bad0f15e2352b4905ef9970e919fece555b0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hypatia, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 17:12:58 compute-0 podman[288232]: 2025-10-01 17:12:58.107397236 +0000 UTC m=+0.019511465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:12:58 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:58 compute-0 systemd[1]: Started libpod-conmon-365276813f5297b98996921b83bad0f15e2352b4905ef9970e919fece555b0ad.scope.
Oct 01 17:12:58 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:12:58 compute-0 podman[288232]: 2025-10-01 17:12:58.380185364 +0000 UTC m=+0.292299623 container init 365276813f5297b98996921b83bad0f15e2352b4905ef9970e919fece555b0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 17:12:58 compute-0 podman[288232]: 2025-10-01 17:12:58.388138749 +0000 UTC m=+0.300252968 container start 365276813f5297b98996921b83bad0f15e2352b4905ef9970e919fece555b0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hypatia, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 17:12:58 compute-0 hungry_hypatia[288247]: 167 167
Oct 01 17:12:58 compute-0 systemd[1]: libpod-365276813f5297b98996921b83bad0f15e2352b4905ef9970e919fece555b0ad.scope: Deactivated successfully.
Oct 01 17:12:58 compute-0 conmon[288247]: conmon 365276813f5297b98996 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-365276813f5297b98996921b83bad0f15e2352b4905ef9970e919fece555b0ad.scope/container/memory.events
Oct 01 17:12:58 compute-0 podman[288232]: 2025-10-01 17:12:58.480975595 +0000 UTC m=+0.393089804 container attach 365276813f5297b98996921b83bad0f15e2352b4905ef9970e919fece555b0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hypatia, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:12:58 compute-0 podman[288232]: 2025-10-01 17:12:58.482060592 +0000 UTC m=+0.394174791 container died 365276813f5297b98996921b83bad0f15e2352b4905ef9970e919fece555b0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:12:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:12:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:12:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:12:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:12:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:12:58 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:12:58 compute-0 ceph-mon[74273]: pgmap v1327: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:12:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2ca5bf39cf9387946e00f3b48a98db4018c3f4bcd113cceb9ab743aeb922e53-merged.mount: Deactivated successfully.
Oct 01 17:12:58 compute-0 podman[288232]: 2025-10-01 17:12:58.871280035 +0000 UTC m=+0.783394254 container remove 365276813f5297b98996921b83bad0f15e2352b4905ef9970e919fece555b0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:12:58 compute-0 systemd[1]: libpod-conmon-365276813f5297b98996921b83bad0f15e2352b4905ef9970e919fece555b0ad.scope: Deactivated successfully.
Oct 01 17:12:59 compute-0 podman[288273]: 2025-10-01 17:12:59.033668553 +0000 UTC m=+0.048818315 container create 95eca03cd573fffb014050b1b2f63a581385d9ca31049e7257f7819745fcfc9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_vaughan, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:12:59 compute-0 systemd[1]: Started libpod-conmon-95eca03cd573fffb014050b1b2f63a581385d9ca31049e7257f7819745fcfc9b.scope.
Oct 01 17:12:59 compute-0 podman[288273]: 2025-10-01 17:12:59.006481329 +0000 UTC m=+0.021631111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:12:59 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d41cec9c2031ec8e3e4ca5ec822396f4ae92d9597836d48b542f1b06bf2af3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d41cec9c2031ec8e3e4ca5ec822396f4ae92d9597836d48b542f1b06bf2af3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d41cec9c2031ec8e3e4ca5ec822396f4ae92d9597836d48b542f1b06bf2af3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d41cec9c2031ec8e3e4ca5ec822396f4ae92d9597836d48b542f1b06bf2af3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d41cec9c2031ec8e3e4ca5ec822396f4ae92d9597836d48b542f1b06bf2af3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 17:12:59 compute-0 podman[288273]: 2025-10-01 17:12:59.198649116 +0000 UTC m=+0.213798888 container init 95eca03cd573fffb014050b1b2f63a581385d9ca31049e7257f7819745fcfc9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_vaughan, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:12:59 compute-0 podman[288273]: 2025-10-01 17:12:59.20760164 +0000 UTC m=+0.222751442 container start 95eca03cd573fffb014050b1b2f63a581385d9ca31049e7257f7819745fcfc9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 01 17:12:59 compute-0 podman[288273]: 2025-10-01 17:12:59.257326724 +0000 UTC m=+0.272476496 container attach 95eca03cd573fffb014050b1b2f63a581385d9ca31049e7257f7819745fcfc9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 01 17:13:00 compute-0 loving_vaughan[288290]: --> passed data devices: 0 physical, 3 LVM
Oct 01 17:13:00 compute-0 loving_vaughan[288290]: --> relative data size: 1.0
Oct 01 17:13:00 compute-0 loving_vaughan[288290]: --> All data devices are unavailable
Oct 01 17:13:00 compute-0 systemd[1]: libpod-95eca03cd573fffb014050b1b2f63a581385d9ca31049e7257f7819745fcfc9b.scope: Deactivated successfully.
Oct 01 17:13:00 compute-0 podman[288273]: 2025-10-01 17:13:00.261257202 +0000 UTC m=+1.276406984 container died 95eca03cd573fffb014050b1b2f63a581385d9ca31049e7257f7819745fcfc9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 01 17:13:00 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-44d41cec9c2031ec8e3e4ca5ec822396f4ae92d9597836d48b542f1b06bf2af3-merged.mount: Deactivated successfully.
Oct 01 17:13:01 compute-0 podman[288273]: 2025-10-01 17:13:01.101279155 +0000 UTC m=+2.116428957 container remove 95eca03cd573fffb014050b1b2f63a581385d9ca31049e7257f7819745fcfc9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:13:01 compute-0 systemd[1]: libpod-conmon-95eca03cd573fffb014050b1b2f63a581385d9ca31049e7257f7819745fcfc9b.scope: Deactivated successfully.
Oct 01 17:13:01 compute-0 sudo[288166]: pam_unix(sudo:session): session closed for user root
Oct 01 17:13:01 compute-0 sudo[288332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:13:01 compute-0 sudo[288332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:13:01 compute-0 sudo[288332]: pam_unix(sudo:session): session closed for user root
Oct 01 17:13:01 compute-0 sudo[288357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:13:01 compute-0 sudo[288357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:13:01 compute-0 sudo[288357]: pam_unix(sudo:session): session closed for user root
Oct 01 17:13:01 compute-0 sudo[288382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:13:01 compute-0 sudo[288382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:13:01 compute-0 sudo[288382]: pam_unix(sudo:session): session closed for user root
Oct 01 17:13:01 compute-0 sudo[288407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 17:13:01 compute-0 sudo[288407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:13:01 compute-0 ceph-mon[74273]: pgmap v1328: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:01 compute-0 podman[288474]: 2025-10-01 17:13:01.758081544 +0000 UTC m=+0.048455501 container create 91f5c94f4fb90469e539d72c4e60a6d4d585b543278e0e5d28244b14de0157fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gould, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Oct 01 17:13:01 compute-0 podman[288474]: 2025-10-01 17:13:01.729879562 +0000 UTC m=+0.020253539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:13:01 compute-0 systemd[1]: Started libpod-conmon-91f5c94f4fb90469e539d72c4e60a6d4d585b543278e0e5d28244b14de0157fc.scope.
Oct 01 17:13:01 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:13:01 compute-0 podman[288474]: 2025-10-01 17:13:01.942213759 +0000 UTC m=+0.232587796 container init 91f5c94f4fb90469e539d72c4e60a6d4d585b543278e0e5d28244b14de0157fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 17:13:01 compute-0 podman[288474]: 2025-10-01 17:13:01.950076005 +0000 UTC m=+0.240449962 container start 91f5c94f4fb90469e539d72c4e60a6d4d585b543278e0e5d28244b14de0157fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gould, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 17:13:01 compute-0 podman[288474]: 2025-10-01 17:13:01.955366805 +0000 UTC m=+0.245740862 container attach 91f5c94f4fb90469e539d72c4e60a6d4d585b543278e0e5d28244b14de0157fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 17:13:01 compute-0 elastic_gould[288490]: 167 167
Oct 01 17:13:01 compute-0 systemd[1]: libpod-91f5c94f4fb90469e539d72c4e60a6d4d585b543278e0e5d28244b14de0157fc.scope: Deactivated successfully.
Oct 01 17:13:02 compute-0 podman[288495]: 2025-10-01 17:13:02.0016901 +0000 UTC m=+0.030027875 container died 91f5c94f4fb90469e539d72c4e60a6d4d585b543278e0e5d28244b14de0157fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:13:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:13:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d96dde975756072ffc72853e61d83616ed8f690afb7fd38461ea02493fc16c0-merged.mount: Deactivated successfully.
Oct 01 17:13:02 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:02 compute-0 podman[288495]: 2025-10-01 17:13:02.524287895 +0000 UTC m=+0.552625610 container remove 91f5c94f4fb90469e539d72c4e60a6d4d585b543278e0e5d28244b14de0157fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 17:13:02 compute-0 systemd[1]: libpod-conmon-91f5c94f4fb90469e539d72c4e60a6d4d585b543278e0e5d28244b14de0157fc.scope: Deactivated successfully.
Oct 01 17:13:02 compute-0 ceph-mon[74273]: pgmap v1329: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:02 compute-0 podman[288517]: 2025-10-01 17:13:02.727520472 +0000 UTC m=+0.040553907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:13:02 compute-0 podman[288517]: 2025-10-01 17:13:02.821385967 +0000 UTC m=+0.134419322 container create af46509670b7b54566e54f01754cd23b991f4cebbc388af4db296ab85d1c203e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ritchie, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 17:13:02 compute-0 systemd[1]: Started libpod-conmon-af46509670b7b54566e54f01754cd23b991f4cebbc388af4db296ab85d1c203e.scope.
Oct 01 17:13:03 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263501a3dbbea38e15e765d0789a3c4ecf9e8eced1300b38de4e0ee7d2ac67b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263501a3dbbea38e15e765d0789a3c4ecf9e8eced1300b38de4e0ee7d2ac67b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263501a3dbbea38e15e765d0789a3c4ecf9e8eced1300b38de4e0ee7d2ac67b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263501a3dbbea38e15e765d0789a3c4ecf9e8eced1300b38de4e0ee7d2ac67b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:13:03 compute-0 podman[288517]: 2025-10-01 17:13:03.056649135 +0000 UTC m=+0.369682530 container init af46509670b7b54566e54f01754cd23b991f4cebbc388af4db296ab85d1c203e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ritchie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:13:03 compute-0 podman[288517]: 2025-10-01 17:13:03.067762074 +0000 UTC m=+0.380795469 container start af46509670b7b54566e54f01754cd23b991f4cebbc388af4db296ab85d1c203e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 17:13:03 compute-0 podman[288517]: 2025-10-01 17:13:03.073992464 +0000 UTC m=+0.387025839 container attach af46509670b7b54566e54f01754cd23b991f4cebbc388af4db296ab85d1c203e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]: {
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:     "0": [
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:         {
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "devices": [
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "/dev/loop3"
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             ],
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "lv_name": "ceph_lv0",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "lv_size": "21470642176",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "name": "ceph_lv0",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "tags": {
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.cluster_name": "ceph",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.crush_device_class": "",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.encrypted": "0",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.osd_id": "0",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.type": "block",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.vdo": "0"
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             },
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "type": "block",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "vg_name": "ceph_vg0"
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:         }
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:     ],
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:     "1": [
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:         {
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "devices": [
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "/dev/loop4"
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             ],
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "lv_name": "ceph_lv1",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "lv_size": "21470642176",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "name": "ceph_lv1",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "tags": {
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.cluster_name": "ceph",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.crush_device_class": "",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.encrypted": "0",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.osd_id": "1",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.type": "block",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.vdo": "0"
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             },
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "type": "block",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "vg_name": "ceph_vg1"
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:         }
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:     ],
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:     "2": [
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:         {
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "devices": [
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "/dev/loop5"
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             ],
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "lv_name": "ceph_lv2",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "lv_size": "21470642176",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "name": "ceph_lv2",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "tags": {
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.cluster_name": "ceph",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.crush_device_class": "",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.encrypted": "0",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.osd_id": "2",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.type": "block",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:                 "ceph.vdo": "0"
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             },
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "type": "block",
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:             "vg_name": "ceph_vg2"
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:         }
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]:     ]
Oct 01 17:13:03 compute-0 vigorous_ritchie[288534]: }
Oct 01 17:13:03 compute-0 systemd[1]: libpod-af46509670b7b54566e54f01754cd23b991f4cebbc388af4db296ab85d1c203e.scope: Deactivated successfully.
Oct 01 17:13:03 compute-0 podman[288517]: 2025-10-01 17:13:03.867425978 +0000 UTC m=+1.180459333 container died af46509670b7b54566e54f01754cd23b991f4cebbc388af4db296ab85d1c203e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ritchie, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 01 17:13:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-263501a3dbbea38e15e765d0789a3c4ecf9e8eced1300b38de4e0ee7d2ac67b3-merged.mount: Deactivated successfully.
Oct 01 17:13:03 compute-0 podman[288517]: 2025-10-01 17:13:03.954516174 +0000 UTC m=+1.267549529 container remove af46509670b7b54566e54f01754cd23b991f4cebbc388af4db296ab85d1c203e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 17:13:03 compute-0 systemd[1]: libpod-conmon-af46509670b7b54566e54f01754cd23b991f4cebbc388af4db296ab85d1c203e.scope: Deactivated successfully.
Oct 01 17:13:03 compute-0 sudo[288407]: pam_unix(sudo:session): session closed for user root
Oct 01 17:13:04 compute-0 sudo[288554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:13:04 compute-0 sudo[288554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:13:04 compute-0 sudo[288554]: pam_unix(sudo:session): session closed for user root
Oct 01 17:13:04 compute-0 sudo[288579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:13:04 compute-0 sudo[288579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:13:04 compute-0 sudo[288579]: pam_unix(sudo:session): session closed for user root
Oct 01 17:13:04 compute-0 sudo[288604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:13:04 compute-0 sudo[288604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:13:04 compute-0 sudo[288604]: pam_unix(sudo:session): session closed for user root
Oct 01 17:13:04 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:04 compute-0 sudo[288629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 17:13:04 compute-0 sudo[288629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:13:04 compute-0 podman[288695]: 2025-10-01 17:13:04.593273859 +0000 UTC m=+0.047935023 container create 89745ac449502d1c3c09fa64b42000ccc29a539617b3ac3390969df9a48448e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_leakey, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:13:04 compute-0 systemd[1]: Started libpod-conmon-89745ac449502d1c3c09fa64b42000ccc29a539617b3ac3390969df9a48448e3.scope.
Oct 01 17:13:04 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:13:04 compute-0 podman[288695]: 2025-10-01 17:13:04.657504831 +0000 UTC m=+0.112165985 container init 89745ac449502d1c3c09fa64b42000ccc29a539617b3ac3390969df9a48448e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 17:13:04 compute-0 podman[288695]: 2025-10-01 17:13:04.566844529 +0000 UTC m=+0.021505773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:13:04 compute-0 podman[288695]: 2025-10-01 17:13:04.663854749 +0000 UTC m=+0.118515903 container start 89745ac449502d1c3c09fa64b42000ccc29a539617b3ac3390969df9a48448e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 01 17:13:04 compute-0 happy_leakey[288711]: 167 167
Oct 01 17:13:04 compute-0 systemd[1]: libpod-89745ac449502d1c3c09fa64b42000ccc29a539617b3ac3390969df9a48448e3.scope: Deactivated successfully.
Oct 01 17:13:04 compute-0 podman[288695]: 2025-10-01 17:13:04.668483122 +0000 UTC m=+0.123144276 container attach 89745ac449502d1c3c09fa64b42000ccc29a539617b3ac3390969df9a48448e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_leakey, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:13:04 compute-0 podman[288695]: 2025-10-01 17:13:04.668777496 +0000 UTC m=+0.123438650 container died 89745ac449502d1c3c09fa64b42000ccc29a539617b3ac3390969df9a48448e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:13:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca13445c131beb27d5f9d0341a8d4dfbc30df8ccbef67224ab636349d6b44db4-merged.mount: Deactivated successfully.
Oct 01 17:13:04 compute-0 podman[288695]: 2025-10-01 17:13:04.712511785 +0000 UTC m=+0.167172939 container remove 89745ac449502d1c3c09fa64b42000ccc29a539617b3ac3390969df9a48448e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_leakey, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 17:13:04 compute-0 systemd[1]: libpod-conmon-89745ac449502d1c3c09fa64b42000ccc29a539617b3ac3390969df9a48448e3.scope: Deactivated successfully.
Oct 01 17:13:04 compute-0 podman[288735]: 2025-10-01 17:13:04.92451099 +0000 UTC m=+0.082563622 container create 8920974a21d96cc68a0b97ed62aa6f63b410a99901301f9631d8324ddcc5d4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_driscoll, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 17:13:04 compute-0 podman[288735]: 2025-10-01 17:13:04.862289236 +0000 UTC m=+0.020341868 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:13:05 compute-0 systemd[1]: Started libpod-conmon-8920974a21d96cc68a0b97ed62aa6f63b410a99901301f9631d8324ddcc5d4d5.scope.
Oct 01 17:13:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8dcf1cded66792f39e04c9e18a874b672b703590d5d66722c48a11bfb216379/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8dcf1cded66792f39e04c9e18a874b672b703590d5d66722c48a11bfb216379/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8dcf1cded66792f39e04c9e18a874b672b703590d5d66722c48a11bfb216379/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8dcf1cded66792f39e04c9e18a874b672b703590d5d66722c48a11bfb216379/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:13:05 compute-0 podman[288735]: 2025-10-01 17:13:05.046875411 +0000 UTC m=+0.204928043 container init 8920974a21d96cc68a0b97ed62aa6f63b410a99901301f9631d8324ddcc5d4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_driscoll, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 17:13:05 compute-0 podman[288735]: 2025-10-01 17:13:05.053523862 +0000 UTC m=+0.211576474 container start 8920974a21d96cc68a0b97ed62aa6f63b410a99901301f9631d8324ddcc5d4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_driscoll, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:13:05 compute-0 podman[288735]: 2025-10-01 17:13:05.057522509 +0000 UTC m=+0.215575151 container attach 8920974a21d96cc68a0b97ed62aa6f63b410a99901301f9631d8324ddcc5d4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_driscoll, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:13:05 compute-0 ceph-mon[74273]: pgmap v1330: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]: {
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:         "osd_id": 2,
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:         "type": "bluestore"
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:     },
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:         "osd_id": 0,
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:         "type": "bluestore"
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:     },
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:         "osd_id": 1,
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:         "type": "bluestore"
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]:     }
Oct 01 17:13:06 compute-0 quizzical_driscoll[288752]: }
Oct 01 17:13:06 compute-0 systemd[1]: libpod-8920974a21d96cc68a0b97ed62aa6f63b410a99901301f9631d8324ddcc5d4d5.scope: Deactivated successfully.
Oct 01 17:13:06 compute-0 systemd[1]: libpod-8920974a21d96cc68a0b97ed62aa6f63b410a99901301f9631d8324ddcc5d4d5.scope: Consumed 1.062s CPU time.
Oct 01 17:13:06 compute-0 podman[288735]: 2025-10-01 17:13:06.108804712 +0000 UTC m=+1.266857354 container died 8920974a21d96cc68a0b97ed62aa6f63b410a99901301f9631d8324ddcc5d4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 17:13:06 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8dcf1cded66792f39e04c9e18a874b672b703590d5d66722c48a11bfb216379-merged.mount: Deactivated successfully.
Oct 01 17:13:06 compute-0 ceph-mon[74273]: pgmap v1331: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:06 compute-0 podman[288735]: 2025-10-01 17:13:06.7041637 +0000 UTC m=+1.862216312 container remove 8920974a21d96cc68a0b97ed62aa6f63b410a99901301f9631d8324ddcc5d4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_driscoll, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:13:06 compute-0 sudo[288629]: pam_unix(sudo:session): session closed for user root
Oct 01 17:13:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:13:06 compute-0 systemd[1]: libpod-conmon-8920974a21d96cc68a0b97ed62aa6f63b410a99901301f9631d8324ddcc5d4d5.scope: Deactivated successfully.
Oct 01 17:13:06 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:13:06 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:13:06 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:13:06 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 83adb3e8-db6c-4a82-8d32-415e553efec1 does not exist
Oct 01 17:13:06 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 94638a47-4ad8-4d92-8ebf-4a8b673ab3ab does not exist
Oct 01 17:13:07 compute-0 sudo[288797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:13:07 compute-0 sudo[288797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:13:07 compute-0 sudo[288797]: pam_unix(sudo:session): session closed for user root
Oct 01 17:13:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:13:07 compute-0 sudo[288822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 17:13:07 compute-0 sudo[288822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:13:07 compute-0 sudo[288822]: pam_unix(sudo:session): session closed for user root
Oct 01 17:13:07 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:13:07 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:13:08 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:08 compute-0 ceph-mon[74273]: pgmap v1332: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:10 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:11 compute-0 ceph-mon[74273]: pgmap v1333: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:13:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:13:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_17:13:11
Oct 01 17:13:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 17:13:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 17:13:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.control', 'volumes', 'images', 'default.rgw.log', 'vms', 'cephfs.cephfs.data']
Oct 01 17:13:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 17:13:11 compute-0 podman[288847]: 2025-10-01 17:13:11.414540387 +0000 UTC m=+0.061861243 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 01 17:13:11 compute-0 podman[288848]: 2025-10-01 17:13:11.414076827 +0000 UTC m=+0.061416262 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 01 17:13:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:13:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:13:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:13:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:13:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:13:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 17:13:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:13:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 17:13:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:13:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:13:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:13:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:13:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:13:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:13:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:13:12 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:13 compute-0 ceph-mon[74273]: pgmap v1334: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:14 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:15 compute-0 ceph-mon[74273]: pgmap v1335: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:16 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:16 compute-0 ceph-mon[74273]: pgmap v1336: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:13:18 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:18 compute-0 ceph-mon[74273]: pgmap v1337: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:13:19.983 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:13:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:13:19.983 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:13:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:13:19.983 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:13:20 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:20 compute-0 ceph-mon[74273]: pgmap v1338: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005739061380803542 of space, bias 4.0, pg target 0.6886873656964251 quantized to 16 (current 16)
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:13:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:13:21 compute-0 nova_compute[259504]: 2025-10-01 17:13:21.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:13:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:13:22 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:22 compute-0 podman[288889]: 2025-10-01 17:13:22.775541796 +0000 UTC m=+0.087943609 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 01 17:13:23 compute-0 ceph-mon[74273]: pgmap v1339: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:23 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:13:23 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 6550 writes, 30K keys, 6550 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6550 writes, 6550 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1765 writes, 8577 keys, 1765 commit groups, 1.0 writes per commit group, ingest: 10.71 MB, 0.02 MB/s
                                           Interval WAL: 1765 writes, 1765 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     29.8      1.12              0.13        16    0.070       0      0       0.0       0.0
                                             L6      1/0    8.25 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4     72.3     59.1      1.91              0.43        15    0.127     72K   8390       0.0       0.0
                                            Sum      1/0    8.25 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     45.6     48.3      3.03              0.55        31    0.098     72K   8390       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.0     26.8     27.4      1.61              0.17         8    0.202     24K   2602       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     72.3     59.1      1.91              0.43        15    0.127     72K   8390       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     32.0      1.04              0.13        15    0.069       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.08              0.00         1    0.079       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.033, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.06 MB/s read, 3.0 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 1.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5647d11d91f0#2 capacity: 304.00 MB usage: 16.08 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000131 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1244,15.49 MB,5.09664%) FilterBlock(32,211.36 KB,0.0678966%) IndexBlock(32,385.61 KB,0.123872%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 01 17:13:24 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:25 compute-0 ceph-mon[74273]: pgmap v1340: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:25 compute-0 nova_compute[259504]: 2025-10-01 17:13:25.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:13:25 compute-0 nova_compute[259504]: 2025-10-01 17:13:25.751 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 17:13:25 compute-0 nova_compute[259504]: 2025-10-01 17:13:25.752 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 17:13:25 compute-0 nova_compute[259504]: 2025-10-01 17:13:25.849 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 17:13:26 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:26 compute-0 nova_compute[259504]: 2025-10-01 17:13:26.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:13:26 compute-0 nova_compute[259504]: 2025-10-01 17:13:26.777 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:13:26 compute-0 nova_compute[259504]: 2025-10-01 17:13:26.777 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:13:26 compute-0 nova_compute[259504]: 2025-10-01 17:13:26.778 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:13:26 compute-0 nova_compute[259504]: 2025-10-01 17:13:26.778 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 17:13:26 compute-0 nova_compute[259504]: 2025-10-01 17:13:26.778 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:13:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:13:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:13:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2137229699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:13:27 compute-0 nova_compute[259504]: 2025-10-01 17:13:27.221 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:13:27 compute-0 nova_compute[259504]: 2025-10-01 17:13:27.386 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 17:13:27 compute-0 nova_compute[259504]: 2025-10-01 17:13:27.387 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4982MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 17:13:27 compute-0 nova_compute[259504]: 2025-10-01 17:13:27.387 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:13:27 compute-0 nova_compute[259504]: 2025-10-01 17:13:27.388 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:13:27 compute-0 nova_compute[259504]: 2025-10-01 17:13:27.454 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 17:13:27 compute-0 nova_compute[259504]: 2025-10-01 17:13:27.455 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 17:13:27 compute-0 ceph-mon[74273]: pgmap v1341: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2137229699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:13:27 compute-0 nova_compute[259504]: 2025-10-01 17:13:27.476 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:13:27 compute-0 podman[288957]: 2025-10-01 17:13:27.74965197 +0000 UTC m=+0.065182303 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, 
org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 17:13:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:13:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1496168364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:13:27 compute-0 nova_compute[259504]: 2025-10-01 17:13:27.913 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:13:27 compute-0 nova_compute[259504]: 2025-10-01 17:13:27.917 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 17:13:27 compute-0 nova_compute[259504]: 2025-10-01 17:13:27.955 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 17:13:27 compute-0 nova_compute[259504]: 2025-10-01 17:13:27.956 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 17:13:27 compute-0 nova_compute[259504]: 2025-10-01 17:13:27.956 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:13:28 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1496168364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:13:28 compute-0 ceph-mon[74273]: pgmap v1342: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:29 compute-0 nova_compute[259504]: 2025-10-01 17:13:29.957 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:13:29 compute-0 nova_compute[259504]: 2025-10-01 17:13:29.957 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:13:29 compute-0 nova_compute[259504]: 2025-10-01 17:13:29.958 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:13:30 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:30 compute-0 nova_compute[259504]: 2025-10-01 17:13:30.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:13:30 compute-0 nova_compute[259504]: 2025-10-01 17:13:30.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:13:30 compute-0 nova_compute[259504]: 2025-10-01 17:13:30.751 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 17:13:31 compute-0 ceph-mon[74273]: pgmap v1343: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:13:32 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:32 compute-0 nova_compute[259504]: 2025-10-01 17:13:32.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:13:33 compute-0 ceph-mon[74273]: pgmap v1344: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:34 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:35 compute-0 ceph-mon[74273]: pgmap v1345: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:36 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:37 compute-0 ceph-mon[74273]: pgmap v1346: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:13:38 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:39 compute-0 ceph-mon[74273]: pgmap v1347: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:40 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:41 compute-0 ceph-mon[74273]: pgmap v1348: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:13:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:13:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:13:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:13:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:13:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:13:41 compute-0 podman[288979]: 2025-10-01 17:13:41.741458534 +0000 UTC m=+0.059840286 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 01 17:13:41 compute-0 podman[288980]: 2025-10-01 17:13:41.788997155 +0000 UTC m=+0.092735366 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 17:13:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:13:42 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:43 compute-0 ceph-mon[74273]: pgmap v1349: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 17:13:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2527843481' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:13:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 17:13:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2527843481' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:13:44 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2527843481' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:13:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2527843481' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:13:45 compute-0 ceph-mon[74273]: pgmap v1350: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:46 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:46 compute-0 ceph-mon[74273]: pgmap v1351: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:13:48 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:49 compute-0 ceph-mon[74273]: pgmap v1352: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:50 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:51 compute-0 ceph-mon[74273]: pgmap v1353: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:51 compute-0 rsyslogd[1001]: imjournal: 17399 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 01 17:13:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:13:52 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:53 compute-0 ceph-mon[74273]: pgmap v1354: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:53 compute-0 podman[289017]: 2025-10-01 17:13:53.75269036 +0000 UTC m=+0.067857747 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 01 17:13:54 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:55 compute-0 ceph-mon[74273]: pgmap v1355: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:56 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:56 compute-0 ceph-mon[74273]: pgmap v1356: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:13:58 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:13:58 compute-0 podman[289043]: 2025-10-01 17:13:58.729724138 +0000 UTC m=+0.049247753 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 01 17:13:59 compute-0 ceph-mon[74273]: pgmap v1357: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:00 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:00 compute-0 ceph-mon[74273]: pgmap v1358: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:14:02 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:03 compute-0 ceph-mon[74273]: pgmap v1359: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:04 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:05 compute-0 ceph-mon[74273]: pgmap v1360: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:06 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:07 compute-0 sudo[289062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:14:07 compute-0 sudo[289062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:14:07 compute-0 sudo[289062]: pam_unix(sudo:session): session closed for user root
Oct 01 17:14:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:14:07 compute-0 sudo[289087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:14:07 compute-0 sudo[289087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:14:07 compute-0 sudo[289087]: pam_unix(sudo:session): session closed for user root
Oct 01 17:14:07 compute-0 sudo[289112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:14:07 compute-0 sudo[289112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:14:07 compute-0 sudo[289112]: pam_unix(sudo:session): session closed for user root
Oct 01 17:14:07 compute-0 sudo[289137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 17:14:07 compute-0 sudo[289137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:14:07 compute-0 ceph-mon[74273]: pgmap v1361: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:07 compute-0 sudo[289137]: pam_unix(sudo:session): session closed for user root
Oct 01 17:14:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:14:07 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:14:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 17:14:07 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:14:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 17:14:07 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:14:07 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev f827bc6c-fa3c-4e7d-87fb-4f8019ff9127 does not exist
Oct 01 17:14:07 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev a914ed96-eac8-4373-b7a4-d5f3bd3866a3 does not exist
Oct 01 17:14:07 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev dd555987-cddb-4a6e-988d-81853f9a7572 does not exist
Oct 01 17:14:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 17:14:07 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:14:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 17:14:07 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:14:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:14:07 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:14:07 compute-0 sudo[289192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:14:07 compute-0 sudo[289192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:14:07 compute-0 sudo[289192]: pam_unix(sudo:session): session closed for user root
Oct 01 17:14:07 compute-0 sudo[289217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:14:07 compute-0 sudo[289217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:14:07 compute-0 sudo[289217]: pam_unix(sudo:session): session closed for user root
Oct 01 17:14:08 compute-0 sudo[289242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:14:08 compute-0 sudo[289242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:14:08 compute-0 sudo[289242]: pam_unix(sudo:session): session closed for user root
Oct 01 17:14:08 compute-0 sudo[289267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 17:14:08 compute-0 sudo[289267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:14:08 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:08 compute-0 podman[289331]: 2025-10-01 17:14:08.373370519 +0000 UTC m=+0.025357776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:14:08 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:14:08 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:14:08 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:14:08 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:14:08 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:14:08 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:14:08 compute-0 podman[289331]: 2025-10-01 17:14:08.62374577 +0000 UTC m=+0.275733047 container create 0e12659a7a0512f86c044f0de47c5e03bf5ab2183509457e12f45dedfe9261be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 17:14:08 compute-0 systemd[1]: Started libpod-conmon-0e12659a7a0512f86c044f0de47c5e03bf5ab2183509457e12f45dedfe9261be.scope.
Oct 01 17:14:08 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:14:08 compute-0 podman[289331]: 2025-10-01 17:14:08.747476327 +0000 UTC m=+0.399463584 container init 0e12659a7a0512f86c044f0de47c5e03bf5ab2183509457e12f45dedfe9261be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_saha, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct 01 17:14:08 compute-0 podman[289331]: 2025-10-01 17:14:08.754795127 +0000 UTC m=+0.406782364 container start 0e12659a7a0512f86c044f0de47c5e03bf5ab2183509457e12f45dedfe9261be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_saha, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 17:14:08 compute-0 silly_saha[289347]: 167 167
Oct 01 17:14:08 compute-0 systemd[1]: libpod-0e12659a7a0512f86c044f0de47c5e03bf5ab2183509457e12f45dedfe9261be.scope: Deactivated successfully.
Oct 01 17:14:08 compute-0 podman[289331]: 2025-10-01 17:14:08.795394161 +0000 UTC m=+0.447381428 container attach 0e12659a7a0512f86c044f0de47c5e03bf5ab2183509457e12f45dedfe9261be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:14:08 compute-0 podman[289331]: 2025-10-01 17:14:08.79656135 +0000 UTC m=+0.448548577 container died 0e12659a7a0512f86c044f0de47c5e03bf5ab2183509457e12f45dedfe9261be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:14:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-891b1668ee16a75a2f720c83c26ede21066b36752a0d2de134976b3cd71c05ba-merged.mount: Deactivated successfully.
Oct 01 17:14:09 compute-0 podman[289331]: 2025-10-01 17:14:09.088094305 +0000 UTC m=+0.740081552 container remove 0e12659a7a0512f86c044f0de47c5e03bf5ab2183509457e12f45dedfe9261be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:14:09 compute-0 systemd[1]: libpod-conmon-0e12659a7a0512f86c044f0de47c5e03bf5ab2183509457e12f45dedfe9261be.scope: Deactivated successfully.
Oct 01 17:14:09 compute-0 podman[289371]: 2025-10-01 17:14:09.26199539 +0000 UTC m=+0.039237710 container create d8e9cd76e6795fb3c43b593abad97dd7b47754fe8d4022c50d426d5df01fb316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 17:14:09 compute-0 systemd[1]: Started libpod-conmon-d8e9cd76e6795fb3c43b593abad97dd7b47754fe8d4022c50d426d5df01fb316.scope.
Oct 01 17:14:09 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36c104d5690215b79968632d4d6c252f36dd0b01deae0325c2314f1933f443b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36c104d5690215b79968632d4d6c252f36dd0b01deae0325c2314f1933f443b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36c104d5690215b79968632d4d6c252f36dd0b01deae0325c2314f1933f443b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36c104d5690215b79968632d4d6c252f36dd0b01deae0325c2314f1933f443b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36c104d5690215b79968632d4d6c252f36dd0b01deae0325c2314f1933f443b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 17:14:09 compute-0 podman[289371]: 2025-10-01 17:14:09.244809146 +0000 UTC m=+0.022051496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:14:09 compute-0 podman[289371]: 2025-10-01 17:14:09.347200709 +0000 UTC m=+0.124443059 container init d8e9cd76e6795fb3c43b593abad97dd7b47754fe8d4022c50d426d5df01fb316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_heisenberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 17:14:09 compute-0 podman[289371]: 2025-10-01 17:14:09.355215631 +0000 UTC m=+0.132457951 container start d8e9cd76e6795fb3c43b593abad97dd7b47754fe8d4022c50d426d5df01fb316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_heisenberg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:14:09 compute-0 podman[289371]: 2025-10-01 17:14:09.359024795 +0000 UTC m=+0.136267135 container attach d8e9cd76e6795fb3c43b593abad97dd7b47754fe8d4022c50d426d5df01fb316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_heisenberg, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 17:14:09 compute-0 ceph-mon[74273]: pgmap v1362: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:10 compute-0 thirsty_heisenberg[289387]: --> passed data devices: 0 physical, 3 LVM
Oct 01 17:14:10 compute-0 thirsty_heisenberg[289387]: --> relative data size: 1.0
Oct 01 17:14:10 compute-0 thirsty_heisenberg[289387]: --> All data devices are unavailable
Oct 01 17:14:10 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:10 compute-0 systemd[1]: libpod-d8e9cd76e6795fb3c43b593abad97dd7b47754fe8d4022c50d426d5df01fb316.scope: Deactivated successfully.
Oct 01 17:14:10 compute-0 podman[289371]: 2025-10-01 17:14:10.311683939 +0000 UTC m=+1.088926259 container died d8e9cd76e6795fb3c43b593abad97dd7b47754fe8d4022c50d426d5df01fb316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:14:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-c36c104d5690215b79968632d4d6c252f36dd0b01deae0325c2314f1933f443b-merged.mount: Deactivated successfully.
Oct 01 17:14:10 compute-0 podman[289371]: 2025-10-01 17:14:10.375170729 +0000 UTC m=+1.152413049 container remove d8e9cd76e6795fb3c43b593abad97dd7b47754fe8d4022c50d426d5df01fb316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:14:10 compute-0 systemd[1]: libpod-conmon-d8e9cd76e6795fb3c43b593abad97dd7b47754fe8d4022c50d426d5df01fb316.scope: Deactivated successfully.
Oct 01 17:14:10 compute-0 sudo[289267]: pam_unix(sudo:session): session closed for user root
Oct 01 17:14:10 compute-0 sudo[289429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:14:10 compute-0 sudo[289429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:14:10 compute-0 sudo[289429]: pam_unix(sudo:session): session closed for user root
Oct 01 17:14:10 compute-0 sudo[289454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:14:10 compute-0 sudo[289454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:14:10 compute-0 sudo[289454]: pam_unix(sudo:session): session closed for user root
Oct 01 17:14:10 compute-0 ceph-mon[74273]: pgmap v1363: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:10 compute-0 sudo[289479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:14:10 compute-0 sudo[289479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:14:10 compute-0 sudo[289479]: pam_unix(sudo:session): session closed for user root
Oct 01 17:14:10 compute-0 sudo[289504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 17:14:10 compute-0 sudo[289504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:14:10 compute-0 podman[289568]: 2025-10-01 17:14:10.949233996 +0000 UTC m=+0.037697705 container create 27db7117cbcde70166efefd37c81047333617b2aad2c955ca0b9401650ad0814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_kirch, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 17:14:10 compute-0 systemd[1]: Started libpod-conmon-27db7117cbcde70166efefd37c81047333617b2aad2c955ca0b9401650ad0814.scope.
Oct 01 17:14:10 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:14:11 compute-0 podman[289568]: 2025-10-01 17:14:11.009631292 +0000 UTC m=+0.098119759 container init 27db7117cbcde70166efefd37c81047333617b2aad2c955ca0b9401650ad0814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_kirch, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:14:11 compute-0 podman[289568]: 2025-10-01 17:14:11.016219213 +0000 UTC m=+0.104682922 container start 27db7117cbcde70166efefd37c81047333617b2aad2c955ca0b9401650ad0814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_kirch, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 17:14:11 compute-0 flamboyant_kirch[289584]: 167 167
Oct 01 17:14:11 compute-0 systemd[1]: libpod-27db7117cbcde70166efefd37c81047333617b2aad2c955ca0b9401650ad0814.scope: Deactivated successfully.
Oct 01 17:14:11 compute-0 podman[289568]: 2025-10-01 17:14:11.01985453 +0000 UTC m=+0.108318239 container attach 27db7117cbcde70166efefd37c81047333617b2aad2c955ca0b9401650ad0814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_kirch, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 17:14:11 compute-0 podman[289568]: 2025-10-01 17:14:11.021166679 +0000 UTC m=+0.109630388 container died 27db7117cbcde70166efefd37c81047333617b2aad2c955ca0b9401650ad0814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:14:11 compute-0 podman[289568]: 2025-10-01 17:14:10.931373981 +0000 UTC m=+0.019837720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:14:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e93250ffc83c590764514422c7404135d9a7640f54dc6ac06346d8382f313db-merged.mount: Deactivated successfully.
Oct 01 17:14:11 compute-0 podman[289568]: 2025-10-01 17:14:11.06697049 +0000 UTC m=+0.155434199 container remove 27db7117cbcde70166efefd37c81047333617b2aad2c955ca0b9401650ad0814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_kirch, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 17:14:11 compute-0 systemd[1]: libpod-conmon-27db7117cbcde70166efefd37c81047333617b2aad2c955ca0b9401650ad0814.scope: Deactivated successfully.
Oct 01 17:14:11 compute-0 podman[289608]: 2025-10-01 17:14:11.25171075 +0000 UTC m=+0.042336584 container create 03aee193facb7982210dc7449901b1e916679c0371de97828835e983317f71b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_joliot, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 17:14:11 compute-0 systemd[1]: Started libpod-conmon-03aee193facb7982210dc7449901b1e916679c0371de97828835e983317f71b5.scope.
Oct 01 17:14:11 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d10b510ddb7522f6c0d567a5688c03ac13d94fdc7b4ff09cf99438bc0f3a3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d10b510ddb7522f6c0d567a5688c03ac13d94fdc7b4ff09cf99438bc0f3a3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d10b510ddb7522f6c0d567a5688c03ac13d94fdc7b4ff09cf99438bc0f3a3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d10b510ddb7522f6c0d567a5688c03ac13d94fdc7b4ff09cf99438bc0f3a3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:14:11 compute-0 podman[289608]: 2025-10-01 17:14:11.232105675 +0000 UTC m=+0.022731499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:14:11 compute-0 podman[289608]: 2025-10-01 17:14:11.335734311 +0000 UTC m=+0.126360145 container init 03aee193facb7982210dc7449901b1e916679c0371de97828835e983317f71b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_joliot, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:14:11 compute-0 podman[289608]: 2025-10-01 17:14:11.343070581 +0000 UTC m=+0.133696385 container start 03aee193facb7982210dc7449901b1e916679c0371de97828835e983317f71b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_joliot, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 01 17:14:11 compute-0 podman[289608]: 2025-10-01 17:14:11.346486913 +0000 UTC m=+0.137112727 container attach 03aee193facb7982210dc7449901b1e916679c0371de97828835e983317f71b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:14:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:14:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:14:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_17:14:11
Oct 01 17:14:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 17:14:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 17:14:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['vms', 'volumes', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log']
Oct 01 17:14:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 17:14:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:14:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:14:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:14:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:14:12 compute-0 practical_joliot[289625]: {
Oct 01 17:14:12 compute-0 practical_joliot[289625]:     "0": [
Oct 01 17:14:12 compute-0 practical_joliot[289625]:         {
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "devices": [
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "/dev/loop3"
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             ],
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "lv_name": "ceph_lv0",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "lv_size": "21470642176",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "name": "ceph_lv0",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "tags": {
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.cluster_name": "ceph",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.crush_device_class": "",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.encrypted": "0",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.osd_id": "0",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.type": "block",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.vdo": "0"
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             },
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "type": "block",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "vg_name": "ceph_vg0"
Oct 01 17:14:12 compute-0 practical_joliot[289625]:         }
Oct 01 17:14:12 compute-0 practical_joliot[289625]:     ],
Oct 01 17:14:12 compute-0 practical_joliot[289625]:     "1": [
Oct 01 17:14:12 compute-0 practical_joliot[289625]:         {
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "devices": [
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "/dev/loop4"
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             ],
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "lv_name": "ceph_lv1",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "lv_size": "21470642176",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "name": "ceph_lv1",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "tags": {
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.cluster_name": "ceph",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.crush_device_class": "",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.encrypted": "0",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.osd_id": "1",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.type": "block",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.vdo": "0"
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             },
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "type": "block",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "vg_name": "ceph_vg1"
Oct 01 17:14:12 compute-0 practical_joliot[289625]:         }
Oct 01 17:14:12 compute-0 practical_joliot[289625]:     ],
Oct 01 17:14:12 compute-0 practical_joliot[289625]:     "2": [
Oct 01 17:14:12 compute-0 practical_joliot[289625]:         {
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "devices": [
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "/dev/loop5"
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             ],
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "lv_name": "ceph_lv2",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "lv_size": "21470642176",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "name": "ceph_lv2",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "tags": {
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.cluster_name": "ceph",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.crush_device_class": "",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.encrypted": "0",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.osd_id": "2",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.type": "block",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:                 "ceph.vdo": "0"
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             },
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "type": "block",
Oct 01 17:14:12 compute-0 practical_joliot[289625]:             "vg_name": "ceph_vg2"
Oct 01 17:14:12 compute-0 practical_joliot[289625]:         }
Oct 01 17:14:12 compute-0 practical_joliot[289625]:     ]
Oct 01 17:14:12 compute-0 practical_joliot[289625]: }
Oct 01 17:14:12 compute-0 systemd[1]: libpod-03aee193facb7982210dc7449901b1e916679c0371de97828835e983317f71b5.scope: Deactivated successfully.
Oct 01 17:14:12 compute-0 podman[289608]: 2025-10-01 17:14:12.121009466 +0000 UTC m=+0.911635290 container died 03aee193facb7982210dc7449901b1e916679c0371de97828835e983317f71b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_joliot, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 17:14:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 17:14:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 17:14:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:14:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:14:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:14:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:14:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:14:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:14:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:14:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:14:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-67d10b510ddb7522f6c0d567a5688c03ac13d94fdc7b4ff09cf99438bc0f3a3b-merged.mount: Deactivated successfully.
Oct 01 17:14:12 compute-0 podman[289608]: 2025-10-01 17:14:12.186227576 +0000 UTC m=+0.976853390 container remove 03aee193facb7982210dc7449901b1e916679c0371de97828835e983317f71b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:14:12 compute-0 systemd[1]: libpod-conmon-03aee193facb7982210dc7449901b1e916679c0371de97828835e983317f71b5.scope: Deactivated successfully.
Oct 01 17:14:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:14:12 compute-0 sudo[289504]: pam_unix(sudo:session): session closed for user root
Oct 01 17:14:12 compute-0 podman[289642]: 2025-10-01 17:14:12.243759452 +0000 UTC m=+0.086276385 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 01 17:14:12 compute-0 podman[289635]: 2025-10-01 17:14:12.243729794 +0000 UTC m=+0.085064590 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 01 17:14:12 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:12 compute-0 sudo[289680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:14:12 compute-0 sudo[289680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:14:12 compute-0 sudo[289680]: pam_unix(sudo:session): session closed for user root
Oct 01 17:14:12 compute-0 sudo[289705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:14:12 compute-0 sudo[289705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:14:12 compute-0 sudo[289705]: pam_unix(sudo:session): session closed for user root
Oct 01 17:14:12 compute-0 sudo[289730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:14:12 compute-0 sudo[289730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:14:12 compute-0 sudo[289730]: pam_unix(sudo:session): session closed for user root
Oct 01 17:14:12 compute-0 sudo[289755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 17:14:12 compute-0 sudo[289755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:14:12 compute-0 podman[289820]: 2025-10-01 17:14:12.852431041 +0000 UTC m=+0.047463577 container create 0e1dc0942d74547d94481fb6cd1c451891c4c3b2cfc7e76db04c9c3cea99e747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ramanujan, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:14:12 compute-0 systemd[1]: Started libpod-conmon-0e1dc0942d74547d94481fb6cd1c451891c4c3b2cfc7e76db04c9c3cea99e747.scope.
Oct 01 17:14:12 compute-0 podman[289820]: 2025-10-01 17:14:12.830751 +0000 UTC m=+0.025783516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:14:12 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:14:12 compute-0 podman[289820]: 2025-10-01 17:14:12.968393298 +0000 UTC m=+0.163425814 container init 0e1dc0942d74547d94481fb6cd1c451891c4c3b2cfc7e76db04c9c3cea99e747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ramanujan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 17:14:12 compute-0 podman[289820]: 2025-10-01 17:14:12.97698116 +0000 UTC m=+0.172013656 container start 0e1dc0942d74547d94481fb6cd1c451891c4c3b2cfc7e76db04c9c3cea99e747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ramanujan, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 17:14:12 compute-0 angry_ramanujan[289836]: 167 167
Oct 01 17:14:12 compute-0 systemd[1]: libpod-0e1dc0942d74547d94481fb6cd1c451891c4c3b2cfc7e76db04c9c3cea99e747.scope: Deactivated successfully.
Oct 01 17:14:12 compute-0 podman[289820]: 2025-10-01 17:14:12.999725277 +0000 UTC m=+0.194757773 container attach 0e1dc0942d74547d94481fb6cd1c451891c4c3b2cfc7e76db04c9c3cea99e747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 17:14:13 compute-0 podman[289820]: 2025-10-01 17:14:13.000507543 +0000 UTC m=+0.195540039 container died 0e1dc0942d74547d94481fb6cd1c451891c4c3b2cfc7e76db04c9c3cea99e747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 17:14:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-79b90e1b2511625963f2d7fe895fbb57a31cacc2a121192e757a94fa0cdcb930-merged.mount: Deactivated successfully.
Oct 01 17:14:13 compute-0 podman[289820]: 2025-10-01 17:14:13.104018377 +0000 UTC m=+0.299050883 container remove 0e1dc0942d74547d94481fb6cd1c451891c4c3b2cfc7e76db04c9c3cea99e747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ramanujan, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:14:13 compute-0 systemd[1]: libpod-conmon-0e1dc0942d74547d94481fb6cd1c451891c4c3b2cfc7e76db04c9c3cea99e747.scope: Deactivated successfully.
Oct 01 17:14:13 compute-0 podman[289861]: 2025-10-01 17:14:13.273854574 +0000 UTC m=+0.041131518 container create 0b30b15bab1e5ca59bbbe6e235e3617767115f8cd1578914d55a86699c004ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lichterman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:14:13 compute-0 systemd[1]: Started libpod-conmon-0b30b15bab1e5ca59bbbe6e235e3617767115f8cd1578914d55a86699c004ba7.scope.
Oct 01 17:14:13 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:14:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f08522549657cb6ecde77ab3aa352009a24e101995b45c97821e45fa231cc175/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:14:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f08522549657cb6ecde77ab3aa352009a24e101995b45c97821e45fa231cc175/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:14:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f08522549657cb6ecde77ab3aa352009a24e101995b45c97821e45fa231cc175/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:14:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f08522549657cb6ecde77ab3aa352009a24e101995b45c97821e45fa231cc175/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:14:13 compute-0 ceph-mon[74273]: pgmap v1364: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:13 compute-0 podman[289861]: 2025-10-01 17:14:13.255228631 +0000 UTC m=+0.022505605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:14:13 compute-0 podman[289861]: 2025-10-01 17:14:13.350495469 +0000 UTC m=+0.117772423 container init 0b30b15bab1e5ca59bbbe6e235e3617767115f8cd1578914d55a86699c004ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lichterman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:14:13 compute-0 podman[289861]: 2025-10-01 17:14:13.356205622 +0000 UTC m=+0.123482566 container start 0b30b15bab1e5ca59bbbe6e235e3617767115f8cd1578914d55a86699c004ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lichterman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 17:14:13 compute-0 podman[289861]: 2025-10-01 17:14:13.35996241 +0000 UTC m=+0.127239354 container attach 0b30b15bab1e5ca59bbbe6e235e3617767115f8cd1578914d55a86699c004ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lichterman, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]: {
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:         "osd_id": 2,
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:         "type": "bluestore"
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:     },
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:         "osd_id": 0,
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:         "type": "bluestore"
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:     },
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:         "osd_id": 1,
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:         "type": "bluestore"
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]:     }
Oct 01 17:14:14 compute-0 priceless_lichterman[289878]: }
Oct 01 17:14:14 compute-0 systemd[1]: libpod-0b30b15bab1e5ca59bbbe6e235e3617767115f8cd1578914d55a86699c004ba7.scope: Deactivated successfully.
Oct 01 17:14:14 compute-0 podman[289861]: 2025-10-01 17:14:14.225559554 +0000 UTC m=+0.992836538 container died 0b30b15bab1e5ca59bbbe6e235e3617767115f8cd1578914d55a86699c004ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 17:14:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-f08522549657cb6ecde77ab3aa352009a24e101995b45c97821e45fa231cc175-merged.mount: Deactivated successfully.
Oct 01 17:14:14 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:14 compute-0 podman[289861]: 2025-10-01 17:14:14.294033397 +0000 UTC m=+1.061310341 container remove 0b30b15bab1e5ca59bbbe6e235e3617767115f8cd1578914d55a86699c004ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lichterman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 17:14:14 compute-0 systemd[1]: libpod-conmon-0b30b15bab1e5ca59bbbe6e235e3617767115f8cd1578914d55a86699c004ba7.scope: Deactivated successfully.
Oct 01 17:14:14 compute-0 sudo[289755]: pam_unix(sudo:session): session closed for user root
Oct 01 17:14:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:14:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:14:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:14:14 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:14:14 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev ffcfacfa-1504-4472-afa9-92c4e6e171b6 does not exist
Oct 01 17:14:14 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 3bcdd84a-eb4f-4d29-ba63-78eb9559f365 does not exist
Oct 01 17:14:14 compute-0 sudo[289925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:14:14 compute-0 sudo[289925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:14:14 compute-0 sudo[289925]: pam_unix(sudo:session): session closed for user root
Oct 01 17:14:14 compute-0 sudo[289950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 17:14:14 compute-0 sudo[289950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:14:14 compute-0 sudo[289950]: pam_unix(sudo:session): session closed for user root
Oct 01 17:14:15 compute-0 ceph-mon[74273]: pgmap v1365: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:14:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:14:16 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:14:17 compute-0 ceph-mon[74273]: pgmap v1366: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:18 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:19 compute-0 ceph-mon[74273]: pgmap v1367: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:14:19.984 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:14:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:14:19.986 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:14:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:14:19.986 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:14:20 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:20 compute-0 ceph-mon[74273]: pgmap v1368: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005739061380803542 of space, bias 4.0, pg target 0.6886873656964251 quantized to 16 (current 16)
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:14:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:14:21 compute-0 nova_compute[259504]: 2025-10-01 17:14:21.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:14:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:14:22 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:23 compute-0 ceph-mon[74273]: pgmap v1369: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:24 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:24 compute-0 ceph-mon[74273]: pgmap v1370: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:24 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct 01 17:14:24 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:14:24.718607) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 17:14:24 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct 01 17:14:24 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338864718675, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2410, "num_deletes": 509, "total_data_size": 3585481, "memory_usage": 3684000, "flush_reason": "Manual Compaction"}
Oct 01 17:14:24 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct 01 17:14:24 compute-0 podman[289975]: 2025-10-01 17:14:24.777758137 +0000 UTC m=+0.089237788 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 01 17:14:24 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338864848821, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3529205, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28683, "largest_seqno": 31092, "table_properties": {"data_size": 3518553, "index_size": 6314, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 25667, "raw_average_key_size": 19, "raw_value_size": 3494840, "raw_average_value_size": 2700, "num_data_blocks": 279, "num_entries": 1294, "num_filter_entries": 1294, "num_deletions": 509, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759338646, "oldest_key_time": 1759338646, "file_creation_time": 1759338864, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct 01 17:14:24 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 130251 microseconds, and 13531 cpu microseconds.
Oct 01 17:14:24 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 17:14:24 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:14:24.848865) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3529205 bytes OK
Oct 01 17:14:24 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:14:24.848888) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct 01 17:14:24 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:14:24.929402) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct 01 17:14:24 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:14:24.929440) EVENT_LOG_v1 {"time_micros": 1759338864929430, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 17:14:24 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:14:24.929464) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 17:14:24 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3574126, prev total WAL file size 3574126, number of live WAL files 2.
Oct 01 17:14:24 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:14:24 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:14:24.930654) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct 01 17:14:24 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 17:14:24 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3446KB)], [62(8442KB)]
Oct 01 17:14:24 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338864930686, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12174764, "oldest_snapshot_seqno": -1}
Oct 01 17:14:25 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6132 keys, 10460536 bytes, temperature: kUnknown
Oct 01 17:14:25 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338865408377, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10460536, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10417118, "index_size": 26988, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15365, "raw_key_size": 154512, "raw_average_key_size": 25, "raw_value_size": 10304836, "raw_average_value_size": 1680, "num_data_blocks": 1102, "num_entries": 6132, "num_filter_entries": 6132, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336399, "oldest_key_time": 0, "file_creation_time": 1759338864, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct 01 17:14:25 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 17:14:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:14:25.408636) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10460536 bytes
Oct 01 17:14:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:14:25.417094) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 25.5 rd, 21.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 8.2 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 7166, records dropped: 1034 output_compression: NoCompression
Oct 01 17:14:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:14:25.417124) EVENT_LOG_v1 {"time_micros": 1759338865417112, "job": 34, "event": "compaction_finished", "compaction_time_micros": 477792, "compaction_time_cpu_micros": 28005, "output_level": 6, "num_output_files": 1, "total_output_size": 10460536, "num_input_records": 7166, "num_output_records": 6132, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 17:14:25 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:14:25 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338865417814, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct 01 17:14:25 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:14:25 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338865419299, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct 01 17:14:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:14:24.930575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:14:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:14:25.419436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:14:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:14:25.419451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:14:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:14:25.419454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:14:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:14:25.419457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:14:25 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:14:25.419460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:14:26 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:14:27 compute-0 ceph-mon[74273]: pgmap v1371: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:27 compute-0 nova_compute[259504]: 2025-10-01 17:14:27.749 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:14:27 compute-0 nova_compute[259504]: 2025-10-01 17:14:27.750 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 17:14:27 compute-0 nova_compute[259504]: 2025-10-01 17:14:27.750 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 17:14:27 compute-0 nova_compute[259504]: 2025-10-01 17:14:27.773 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 17:14:27 compute-0 nova_compute[259504]: 2025-10-01 17:14:27.774 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:14:27 compute-0 nova_compute[259504]: 2025-10-01 17:14:27.797 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:14:27 compute-0 nova_compute[259504]: 2025-10-01 17:14:27.797 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:14:27 compute-0 nova_compute[259504]: 2025-10-01 17:14:27.797 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:14:27 compute-0 nova_compute[259504]: 2025-10-01 17:14:27.797 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 17:14:27 compute-0 nova_compute[259504]: 2025-10-01 17:14:27.798 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:14:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:14:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4285536584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:14:28 compute-0 nova_compute[259504]: 2025-10-01 17:14:28.181 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:14:28 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:28 compute-0 nova_compute[259504]: 2025-10-01 17:14:28.330 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 17:14:28 compute-0 nova_compute[259504]: 2025-10-01 17:14:28.331 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4950MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 17:14:28 compute-0 nova_compute[259504]: 2025-10-01 17:14:28.331 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:14:28 compute-0 nova_compute[259504]: 2025-10-01 17:14:28.332 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:14:28 compute-0 nova_compute[259504]: 2025-10-01 17:14:28.578 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 17:14:28 compute-0 nova_compute[259504]: 2025-10-01 17:14:28.578 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 17:14:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4285536584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:14:28 compute-0 nova_compute[259504]: 2025-10-01 17:14:28.657 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Refreshing inventories for resource provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 01 17:14:28 compute-0 nova_compute[259504]: 2025-10-01 17:14:28.772 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Updating ProviderTree inventory for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 01 17:14:28 compute-0 nova_compute[259504]: 2025-10-01 17:14:28.772 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Updating inventory in ProviderTree for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 01 17:14:28 compute-0 nova_compute[259504]: 2025-10-01 17:14:28.797 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Refreshing aggregate associations for resource provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 01 17:14:28 compute-0 nova_compute[259504]: 2025-10-01 17:14:28.819 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Refreshing trait associations for resource provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_ABM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_ACCELERATORS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 01 17:14:28 compute-0 nova_compute[259504]: 2025-10-01 17:14:28.834 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:14:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:14:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3049540237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:14:29 compute-0 nova_compute[259504]: 2025-10-01 17:14:29.317 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:14:29 compute-0 nova_compute[259504]: 2025-10-01 17:14:29.322 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 17:14:29 compute-0 nova_compute[259504]: 2025-10-01 17:14:29.337 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 17:14:29 compute-0 nova_compute[259504]: 2025-10-01 17:14:29.339 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 17:14:29 compute-0 nova_compute[259504]: 2025-10-01 17:14:29.339 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:14:29 compute-0 ceph-mon[74273]: pgmap v1372: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:29 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3049540237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:14:29 compute-0 podman[290046]: 2025-10-01 17:14:29.753230292 +0000 UTC m=+0.067696029 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 17:14:30 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:30 compute-0 nova_compute[259504]: 2025-10-01 17:14:30.315 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:14:30 compute-0 nova_compute[259504]: 2025-10-01 17:14:30.315 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:14:30 compute-0 ceph-mon[74273]: pgmap v1373: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:30 compute-0 nova_compute[259504]: 2025-10-01 17:14:30.745 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:14:31 compute-0 nova_compute[259504]: 2025-10-01 17:14:31.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:14:31 compute-0 nova_compute[259504]: 2025-10-01 17:14:31.750 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 01 17:14:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:14:32 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:32 compute-0 nova_compute[259504]: 2025-10-01 17:14:32.809 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:14:32 compute-0 nova_compute[259504]: 2025-10-01 17:14:32.810 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:14:32 compute-0 nova_compute[259504]: 2025-10-01 17:14:32.810 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 17:14:33 compute-0 ceph-mon[74273]: pgmap v1374: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:34 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:34 compute-0 ceph-mon[74273]: pgmap v1375: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:34 compute-0 nova_compute[259504]: 2025-10-01 17:14:34.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:14:34 compute-0 nova_compute[259504]: 2025-10-01 17:14:34.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:14:36 compute-0 nova_compute[259504]: 2025-10-01 17:14:36.075 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:14:36 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:36 compute-0 ceph-mon[74273]: pgmap v1376: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:14:37 compute-0 nova_compute[259504]: 2025-10-01 17:14:37.785 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:14:38 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:39 compute-0 ceph-mon[74273]: pgmap v1377: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:40 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:40 compute-0 ceph-mon[74273]: pgmap v1378: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:14:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:14:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:14:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:14:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:14:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:14:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:14:42 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:42 compute-0 podman[290065]: 2025-10-01 17:14:42.729284432 +0000 UTC m=+0.050101915 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 01 17:14:42 compute-0 podman[290066]: 2025-10-01 17:14:42.757860948 +0000 UTC m=+0.072870298 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 01 17:14:43 compute-0 ceph-mon[74273]: pgmap v1379: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 17:14:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3934212823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:14:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 17:14:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3934212823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:14:44 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3934212823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:14:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3934212823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:14:45 compute-0 ceph-mon[74273]: pgmap v1380: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:46 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:46 compute-0 ceph-mon[74273]: pgmap v1381: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:14:48 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:49 compute-0 ceph-mon[74273]: pgmap v1382: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:50 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:50 compute-0 ceph-mon[74273]: pgmap v1383: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:14:52 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:52 compute-0 nova_compute[259504]: 2025-10-01 17:14:52.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:14:52 compute-0 nova_compute[259504]: 2025-10-01 17:14:52.750 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 01 17:14:52 compute-0 nova_compute[259504]: 2025-10-01 17:14:52.780 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 01 17:14:53 compute-0 ceph-mon[74273]: pgmap v1384: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:54 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:55 compute-0 ceph-mon[74273]: pgmap v1385: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:55 compute-0 podman[290106]: 2025-10-01 17:14:55.779377709 +0000 UTC m=+0.093132849 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 01 17:14:56 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:56 compute-0 ceph-mon[74273]: pgmap v1386: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:14:58 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:14:59 compute-0 ceph-mon[74273]: pgmap v1387: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:00 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:00 compute-0 ceph-mon[74273]: pgmap v1388: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:00 compute-0 podman[290132]: 2025-10-01 17:15:00.733688731 +0000 UTC m=+0.049824788 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct 01 17:15:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:15:02 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:03 compute-0 ceph-mon[74273]: pgmap v1389: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:04 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:05 compute-0 ceph-mon[74273]: pgmap v1390: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:06 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:07 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:15:07 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 9121 writes, 33K keys, 9121 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9121 writes, 2154 syncs, 4.23 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2198 writes, 6778 keys, 2198 commit groups, 1.0 writes per commit group, ingest: 9.44 MB, 0.02 MB/s
                                           Interval WAL: 2198 writes, 799 syncs, 2.75 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 17:15:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:15:07 compute-0 ceph-mon[74273]: pgmap v1391: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:08 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:09 compute-0 ceph-mon[74273]: pgmap v1392: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:10 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:15:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:15:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_17:15:11
Oct 01 17:15:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 17:15:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 17:15:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'vms', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'images']
Oct 01 17:15:11 compute-0 ceph-mon[74273]: pgmap v1393: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 17:15:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:15:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:15:11 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:15:11 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 13K writes, 51K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 3727 syncs, 3.56 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2847 writes, 9718 keys, 2847 commit groups, 1.0 writes per commit group, ingest: 13.06 MB, 0.02 MB/s
                                           Interval WAL: 2847 writes, 974 syncs, 2.92 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 17:15:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:15:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:15:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 17:15:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:15:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 17:15:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:15:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:15:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:15:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:15:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:15:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:15:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:15:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:15:12 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:13 compute-0 ceph-mon[74273]: pgmap v1394: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:13 compute-0 podman[290152]: 2025-10-01 17:15:13.74168318 +0000 UTC m=+0.060167420 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid)
Oct 01 17:15:13 compute-0 podman[290151]: 2025-10-01 17:15:13.769782724 +0000 UTC m=+0.091326030 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 01 17:15:14 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:14 compute-0 ceph-mon[74273]: pgmap v1395: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:14 compute-0 sudo[290189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:15:14 compute-0 sudo[290189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:15:14 compute-0 sudo[290189]: pam_unix(sudo:session): session closed for user root
Oct 01 17:15:14 compute-0 sudo[290214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:15:14 compute-0 sudo[290214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:15:14 compute-0 sudo[290214]: pam_unix(sudo:session): session closed for user root
Oct 01 17:15:14 compute-0 sudo[290239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:15:14 compute-0 sudo[290239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:15:14 compute-0 sudo[290239]: pam_unix(sudo:session): session closed for user root
Oct 01 17:15:14 compute-0 sudo[290264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 17:15:14 compute-0 sudo[290264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:15:15 compute-0 sudo[290264]: pam_unix(sudo:session): session closed for user root
Oct 01 17:15:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:15:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:15:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 17:15:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:15:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 17:15:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:15:15 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev f7181400-dd45-4474-a5a2-0fccf9e53ab4 does not exist
Oct 01 17:15:15 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 3bc7b0f5-11d2-4fda-ad60-b937a93de1bd does not exist
Oct 01 17:15:15 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 0817ced6-b1af-4a4b-a24f-e5e4ad4caacd does not exist
Oct 01 17:15:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 17:15:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:15:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 17:15:15 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:15:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:15:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:15:15 compute-0 sudo[290320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:15:15 compute-0 sudo[290320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:15:15 compute-0 sudo[290320]: pam_unix(sudo:session): session closed for user root
Oct 01 17:15:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:15:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:15:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:15:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:15:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:15:15 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:15:15 compute-0 sudo[290345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:15:15 compute-0 sudo[290345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:15:15 compute-0 sudo[290345]: pam_unix(sudo:session): session closed for user root
Oct 01 17:15:15 compute-0 sudo[290370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:15:15 compute-0 sudo[290370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:15:15 compute-0 sudo[290370]: pam_unix(sudo:session): session closed for user root
Oct 01 17:15:15 compute-0 sudo[290395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 17:15:15 compute-0 sudo[290395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:15:16 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:15:16 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 9360 writes, 34K keys, 9360 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9360 writes, 2147 syncs, 4.36 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1877 writes, 5246 keys, 1877 commit groups, 1.0 writes per commit group, ingest: 4.05 MB, 0.01 MB/s
                                           Interval WAL: 1878 writes, 587 syncs, 3.20 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 17:15:16 compute-0 podman[290460]: 2025-10-01 17:15:16.135319982 +0000 UTC m=+0.033094765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:15:16 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:16 compute-0 podman[290460]: 2025-10-01 17:15:16.323615204 +0000 UTC m=+0.221389937 container create c213cb7e0fb89a0232cc54366d1fa48ef66969846dc8cc9a80c9dc64b1c2fe23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:15:16 compute-0 systemd[1]: Started libpod-conmon-c213cb7e0fb89a0232cc54366d1fa48ef66969846dc8cc9a80c9dc64b1c2fe23.scope.
Oct 01 17:15:16 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:15:16 compute-0 podman[290460]: 2025-10-01 17:15:16.514967782 +0000 UTC m=+0.412742555 container init c213cb7e0fb89a0232cc54366d1fa48ef66969846dc8cc9a80c9dc64b1c2fe23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:15:16 compute-0 podman[290460]: 2025-10-01 17:15:16.525618125 +0000 UTC m=+0.423392868 container start c213cb7e0fb89a0232cc54366d1fa48ef66969846dc8cc9a80c9dc64b1c2fe23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 17:15:16 compute-0 systemd[1]: libpod-c213cb7e0fb89a0232cc54366d1fa48ef66969846dc8cc9a80c9dc64b1c2fe23.scope: Deactivated successfully.
Oct 01 17:15:16 compute-0 cool_liskov[290476]: 167 167
Oct 01 17:15:16 compute-0 conmon[290476]: conmon c213cb7e0fb89a0232cc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c213cb7e0fb89a0232cc54366d1fa48ef66969846dc8cc9a80c9dc64b1c2fe23.scope/container/memory.events
Oct 01 17:15:16 compute-0 podman[290460]: 2025-10-01 17:15:16.666650467 +0000 UTC m=+0.564425220 container attach c213cb7e0fb89a0232cc54366d1fa48ef66969846dc8cc9a80c9dc64b1c2fe23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 17:15:16 compute-0 podman[290460]: 2025-10-01 17:15:16.667753209 +0000 UTC m=+0.565527982 container died c213cb7e0fb89a0232cc54366d1fa48ef66969846dc8cc9a80c9dc64b1c2fe23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:15:16 compute-0 ceph-mon[74273]: pgmap v1396: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6ac30b7806eb1530b45e7f92f2c3e56a68285c72506055c376c9a58b3c6318d-merged.mount: Deactivated successfully.
Oct 01 17:15:17 compute-0 podman[290460]: 2025-10-01 17:15:17.194668857 +0000 UTC m=+1.092443590 container remove c213cb7e0fb89a0232cc54366d1fa48ef66969846dc8cc9a80c9dc64b1c2fe23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:15:17 compute-0 systemd[1]: libpod-conmon-c213cb7e0fb89a0232cc54366d1fa48ef66969846dc8cc9a80c9dc64b1c2fe23.scope: Deactivated successfully.
Oct 01 17:15:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:15:17 compute-0 podman[290503]: 2025-10-01 17:15:17.351281589 +0000 UTC m=+0.025863131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:15:17 compute-0 podman[290503]: 2025-10-01 17:15:17.46086823 +0000 UTC m=+0.135449752 container create b9628a8c522fa9338ac72d38684aaccadeaf905b3e60ad857083b9bf428a61d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mestorf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:15:17 compute-0 systemd[1]: Started libpod-conmon-b9628a8c522fa9338ac72d38684aaccadeaf905b3e60ad857083b9bf428a61d7.scope.
Oct 01 17:15:17 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/964ed276560fa7723fc91d02f01c9f09389e7576b277b29a6f6c65bfee6aa51d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/964ed276560fa7723fc91d02f01c9f09389e7576b277b29a6f6c65bfee6aa51d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/964ed276560fa7723fc91d02f01c9f09389e7576b277b29a6f6c65bfee6aa51d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/964ed276560fa7723fc91d02f01c9f09389e7576b277b29a6f6c65bfee6aa51d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/964ed276560fa7723fc91d02f01c9f09389e7576b277b29a6f6c65bfee6aa51d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 17:15:17 compute-0 podman[290503]: 2025-10-01 17:15:17.716462151 +0000 UTC m=+0.391043753 container init b9628a8c522fa9338ac72d38684aaccadeaf905b3e60ad857083b9bf428a61d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:15:17 compute-0 podman[290503]: 2025-10-01 17:15:17.730082697 +0000 UTC m=+0.404664259 container start b9628a8c522fa9338ac72d38684aaccadeaf905b3e60ad857083b9bf428a61d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mestorf, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 17:15:17 compute-0 podman[290503]: 2025-10-01 17:15:17.867395688 +0000 UTC m=+0.541977220 container attach b9628a8c522fa9338ac72d38684aaccadeaf905b3e60ad857083b9bf428a61d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 17:15:18 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:18 compute-0 ceph-mgr[74571]: [devicehealth INFO root] Check health
Oct 01 17:15:18 compute-0 practical_mestorf[290520]: --> passed data devices: 0 physical, 3 LVM
Oct 01 17:15:18 compute-0 practical_mestorf[290520]: --> relative data size: 1.0
Oct 01 17:15:18 compute-0 practical_mestorf[290520]: --> All data devices are unavailable
Oct 01 17:15:18 compute-0 systemd[1]: libpod-b9628a8c522fa9338ac72d38684aaccadeaf905b3e60ad857083b9bf428a61d7.scope: Deactivated successfully.
Oct 01 17:15:18 compute-0 systemd[1]: libpod-b9628a8c522fa9338ac72d38684aaccadeaf905b3e60ad857083b9bf428a61d7.scope: Consumed 1.162s CPU time.
Oct 01 17:15:18 compute-0 podman[290503]: 2025-10-01 17:15:18.957201805 +0000 UTC m=+1.631783347 container died b9628a8c522fa9338ac72d38684aaccadeaf905b3e60ad857083b9bf428a61d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct 01 17:15:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-964ed276560fa7723fc91d02f01c9f09389e7576b277b29a6f6c65bfee6aa51d-merged.mount: Deactivated successfully.
Oct 01 17:15:19 compute-0 podman[290503]: 2025-10-01 17:15:19.28813194 +0000 UTC m=+1.962713452 container remove b9628a8c522fa9338ac72d38684aaccadeaf905b3e60ad857083b9bf428a61d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 17:15:19 compute-0 systemd[1]: libpod-conmon-b9628a8c522fa9338ac72d38684aaccadeaf905b3e60ad857083b9bf428a61d7.scope: Deactivated successfully.
Oct 01 17:15:19 compute-0 sudo[290395]: pam_unix(sudo:session): session closed for user root
Oct 01 17:15:19 compute-0 sudo[290563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:15:19 compute-0 sudo[290563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:15:19 compute-0 sudo[290563]: pam_unix(sudo:session): session closed for user root
Oct 01 17:15:19 compute-0 sudo[290588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:15:19 compute-0 sudo[290588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:15:19 compute-0 sudo[290588]: pam_unix(sudo:session): session closed for user root
Oct 01 17:15:19 compute-0 ceph-mon[74273]: pgmap v1397: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:19 compute-0 sudo[290613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:15:19 compute-0 sudo[290613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:15:19 compute-0 sudo[290613]: pam_unix(sudo:session): session closed for user root
Oct 01 17:15:19 compute-0 sudo[290638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 17:15:19 compute-0 sudo[290638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:15:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:15:19.985 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:15:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:15:19.987 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:15:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:15:19.987 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:15:20 compute-0 podman[290701]: 2025-10-01 17:15:20.10152637 +0000 UTC m=+0.098810564 container create ea5db4bb3cf9c0ff17b954404bb6ed5c587afc4398ffb57c594408288370731b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:15:20 compute-0 podman[290701]: 2025-10-01 17:15:20.03184659 +0000 UTC m=+0.029130854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:15:20 compute-0 systemd[1]: Started libpod-conmon-ea5db4bb3cf9c0ff17b954404bb6ed5c587afc4398ffb57c594408288370731b.scope.
Oct 01 17:15:20 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:15:20 compute-0 podman[290701]: 2025-10-01 17:15:20.242696677 +0000 UTC m=+0.239980831 container init ea5db4bb3cf9c0ff17b954404bb6ed5c587afc4398ffb57c594408288370731b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_poincare, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:15:20 compute-0 podman[290701]: 2025-10-01 17:15:20.252615652 +0000 UTC m=+0.249899806 container start ea5db4bb3cf9c0ff17b954404bb6ed5c587afc4398ffb57c594408288370731b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:15:20 compute-0 inspiring_poincare[290717]: 167 167
Oct 01 17:15:20 compute-0 systemd[1]: libpod-ea5db4bb3cf9c0ff17b954404bb6ed5c587afc4398ffb57c594408288370731b.scope: Deactivated successfully.
Oct 01 17:15:20 compute-0 podman[290701]: 2025-10-01 17:15:20.304935704 +0000 UTC m=+0.302219858 container attach ea5db4bb3cf9c0ff17b954404bb6ed5c587afc4398ffb57c594408288370731b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_poincare, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 17:15:20 compute-0 podman[290701]: 2025-10-01 17:15:20.306755333 +0000 UTC m=+0.304039477 container died ea5db4bb3cf9c0ff17b954404bb6ed5c587afc4398ffb57c594408288370731b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_poincare, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:15:20 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7d36d92a1ff01214e799919bc7cc96139109519ca787eea2b67afa15f365582-merged.mount: Deactivated successfully.
Oct 01 17:15:20 compute-0 ceph-mon[74273]: pgmap v1398: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:20 compute-0 podman[290701]: 2025-10-01 17:15:20.709767647 +0000 UTC m=+0.707051841 container remove ea5db4bb3cf9c0ff17b954404bb6ed5c587afc4398ffb57c594408288370731b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 01 17:15:20 compute-0 systemd[1]: libpod-conmon-ea5db4bb3cf9c0ff17b954404bb6ed5c587afc4398ffb57c594408288370731b.scope: Deactivated successfully.
Oct 01 17:15:21 compute-0 podman[290743]: 2025-10-01 17:15:20.999052055 +0000 UTC m=+0.127383956 container create 3020a4df104b56692eebcec795e90aa1259a49b896903bdfb24c50dcd31ab741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shamir, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:15:21 compute-0 podman[290743]: 2025-10-01 17:15:20.919793103 +0000 UTC m=+0.048125004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:15:21 compute-0 systemd[1]: Started libpod-conmon-3020a4df104b56692eebcec795e90aa1259a49b896903bdfb24c50dcd31ab741.scope.
Oct 01 17:15:21 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a71cb50be19e4171ca54e261eb2f322913574c675038b089a6c8b8a9447939/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a71cb50be19e4171ca54e261eb2f322913574c675038b089a6c8b8a9447939/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a71cb50be19e4171ca54e261eb2f322913574c675038b089a6c8b8a9447939/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a71cb50be19e4171ca54e261eb2f322913574c675038b089a6c8b8a9447939/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:15:21 compute-0 podman[290743]: 2025-10-01 17:15:21.312935344 +0000 UTC m=+0.441267275 container init 3020a4df104b56692eebcec795e90aa1259a49b896903bdfb24c50dcd31ab741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shamir, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:15:21 compute-0 podman[290743]: 2025-10-01 17:15:21.327201504 +0000 UTC m=+0.455533445 container start 3020a4df104b56692eebcec795e90aa1259a49b896903bdfb24c50dcd31ab741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shamir, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005739061380803542 of space, bias 4.0, pg target 0.6886873656964251 quantized to 16 (current 16)
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:15:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:15:21 compute-0 podman[290743]: 2025-10-01 17:15:21.430093591 +0000 UTC m=+0.558425592 container attach 3020a4df104b56692eebcec795e90aa1259a49b896903bdfb24c50dcd31ab741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shamir, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 17:15:21 compute-0 nova_compute[259504]: 2025-10-01 17:15:21.781 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:15:22 compute-0 agitated_shamir[290762]: {
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:     "0": [
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:         {
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "devices": [
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "/dev/loop3"
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             ],
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "lv_name": "ceph_lv0",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "lv_size": "21470642176",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "name": "ceph_lv0",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "tags": {
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.cluster_name": "ceph",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.crush_device_class": "",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.encrypted": "0",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.osd_id": "0",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.type": "block",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.vdo": "0"
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             },
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "type": "block",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "vg_name": "ceph_vg0"
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:         }
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:     ],
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:     "1": [
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:         {
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "devices": [
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "/dev/loop4"
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             ],
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "lv_name": "ceph_lv1",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "lv_size": "21470642176",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "name": "ceph_lv1",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "tags": {
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.cluster_name": "ceph",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.crush_device_class": "",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.encrypted": "0",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.osd_id": "1",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.type": "block",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.vdo": "0"
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             },
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "type": "block",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "vg_name": "ceph_vg1"
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:         }
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:     ],
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:     "2": [
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:         {
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "devices": [
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "/dev/loop5"
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             ],
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "lv_name": "ceph_lv2",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "lv_size": "21470642176",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "name": "ceph_lv2",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "tags": {
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.cluster_name": "ceph",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.crush_device_class": "",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.encrypted": "0",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.osd_id": "2",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.type": "block",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:                 "ceph.vdo": "0"
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             },
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "type": "block",
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:             "vg_name": "ceph_vg2"
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:         }
Oct 01 17:15:22 compute-0 agitated_shamir[290762]:     ]
Oct 01 17:15:22 compute-0 agitated_shamir[290762]: }
Oct 01 17:15:22 compute-0 systemd[1]: libpod-3020a4df104b56692eebcec795e90aa1259a49b896903bdfb24c50dcd31ab741.scope: Deactivated successfully.
Oct 01 17:15:22 compute-0 podman[290743]: 2025-10-01 17:15:22.120874676 +0000 UTC m=+1.249206567 container died 3020a4df104b56692eebcec795e90aa1259a49b896903bdfb24c50dcd31ab741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 17:15:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-44a71cb50be19e4171ca54e261eb2f322913574c675038b089a6c8b8a9447939-merged.mount: Deactivated successfully.
Oct 01 17:15:22 compute-0 podman[290743]: 2025-10-01 17:15:22.205225971 +0000 UTC m=+1.333557882 container remove 3020a4df104b56692eebcec795e90aa1259a49b896903bdfb24c50dcd31ab741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 01 17:15:22 compute-0 systemd[1]: libpod-conmon-3020a4df104b56692eebcec795e90aa1259a49b896903bdfb24c50dcd31ab741.scope: Deactivated successfully.
Oct 01 17:15:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:15:22 compute-0 sudo[290638]: pam_unix(sudo:session): session closed for user root
Oct 01 17:15:22 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:22 compute-0 sudo[290785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:15:22 compute-0 sudo[290785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:15:22 compute-0 sudo[290785]: pam_unix(sudo:session): session closed for user root
Oct 01 17:15:22 compute-0 sudo[290810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:15:22 compute-0 sudo[290810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:15:22 compute-0 sudo[290810]: pam_unix(sudo:session): session closed for user root
Oct 01 17:15:22 compute-0 sudo[290835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:15:22 compute-0 sudo[290835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:15:22 compute-0 sudo[290835]: pam_unix(sudo:session): session closed for user root
Oct 01 17:15:22 compute-0 sudo[290860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 17:15:22 compute-0 sudo[290860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:15:23 compute-0 podman[290925]: 2025-10-01 17:15:23.056072628 +0000 UTC m=+0.068519997 container create 2d9d9dc2f5d1b2b46b4d31856226c3e4c263d9e4ac6560bb1ba1695aeb859e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:15:23 compute-0 systemd[1]: Started libpod-conmon-2d9d9dc2f5d1b2b46b4d31856226c3e4c263d9e4ac6560bb1ba1695aeb859e3e.scope.
Oct 01 17:15:23 compute-0 podman[290925]: 2025-10-01 17:15:23.02867615 +0000 UTC m=+0.041123579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:15:23 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:15:23 compute-0 podman[290925]: 2025-10-01 17:15:23.161089475 +0000 UTC m=+0.173536934 container init 2d9d9dc2f5d1b2b46b4d31856226c3e4c263d9e4ac6560bb1ba1695aeb859e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:15:23 compute-0 podman[290925]: 2025-10-01 17:15:23.174231713 +0000 UTC m=+0.186679112 container start 2d9d9dc2f5d1b2b46b4d31856226c3e4c263d9e4ac6560bb1ba1695aeb859e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 17:15:23 compute-0 podman[290925]: 2025-10-01 17:15:23.179287275 +0000 UTC m=+0.191734674 container attach 2d9d9dc2f5d1b2b46b4d31856226c3e4c263d9e4ac6560bb1ba1695aeb859e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:15:23 compute-0 priceless_albattani[290942]: 167 167
Oct 01 17:15:23 compute-0 systemd[1]: libpod-2d9d9dc2f5d1b2b46b4d31856226c3e4c263d9e4ac6560bb1ba1695aeb859e3e.scope: Deactivated successfully.
Oct 01 17:15:23 compute-0 podman[290925]: 2025-10-01 17:15:23.183502774 +0000 UTC m=+0.195950173 container died 2d9d9dc2f5d1b2b46b4d31856226c3e4c263d9e4ac6560bb1ba1695aeb859e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:15:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ab98d74f8b90ce292559ce866d70ce00d58d0cf9e4c79f686d3fc89e47c1ed9-merged.mount: Deactivated successfully.
Oct 01 17:15:23 compute-0 podman[290925]: 2025-10-01 17:15:23.232008962 +0000 UTC m=+0.244456341 container remove 2d9d9dc2f5d1b2b46b4d31856226c3e4c263d9e4ac6560bb1ba1695aeb859e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 17:15:23 compute-0 systemd[1]: libpod-conmon-2d9d9dc2f5d1b2b46b4d31856226c3e4c263d9e4ac6560bb1ba1695aeb859e3e.scope: Deactivated successfully.
Oct 01 17:15:23 compute-0 ceph-mon[74273]: pgmap v1399: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:23 compute-0 podman[290964]: 2025-10-01 17:15:23.482137186 +0000 UTC m=+0.061781171 container create c0b81d99c0104e79fc9ea896464569d36c785bcb33d37d1cc214197918595253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_proskuriakova, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:15:23 compute-0 systemd[1]: Started libpod-conmon-c0b81d99c0104e79fc9ea896464569d36c785bcb33d37d1cc214197918595253.scope.
Oct 01 17:15:23 compute-0 podman[290964]: 2025-10-01 17:15:23.451443783 +0000 UTC m=+0.031087798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:15:23 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe7b0896a6e64be133d4fcb768945f31f7e41811059a38cc86f9a6a7060954b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe7b0896a6e64be133d4fcb768945f31f7e41811059a38cc86f9a6a7060954b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe7b0896a6e64be133d4fcb768945f31f7e41811059a38cc86f9a6a7060954b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe7b0896a6e64be133d4fcb768945f31f7e41811059a38cc86f9a6a7060954b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:15:23 compute-0 podman[290964]: 2025-10-01 17:15:23.595211079 +0000 UTC m=+0.174855104 container init c0b81d99c0104e79fc9ea896464569d36c785bcb33d37d1cc214197918595253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 17:15:23 compute-0 podman[290964]: 2025-10-01 17:15:23.61981015 +0000 UTC m=+0.199454135 container start c0b81d99c0104e79fc9ea896464569d36c785bcb33d37d1cc214197918595253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_proskuriakova, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct 01 17:15:23 compute-0 podman[290964]: 2025-10-01 17:15:23.624384874 +0000 UTC m=+0.204028909 container attach c0b81d99c0104e79fc9ea896464569d36c785bcb33d37d1cc214197918595253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:15:24 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]: {
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:         "osd_id": 2,
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:         "type": "bluestore"
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:     },
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:         "osd_id": 0,
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:         "type": "bluestore"
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:     },
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:         "osd_id": 1,
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:         "type": "bluestore"
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]:     }
Oct 01 17:15:24 compute-0 hungry_proskuriakova[290981]: }
Oct 01 17:15:24 compute-0 systemd[1]: libpod-c0b81d99c0104e79fc9ea896464569d36c785bcb33d37d1cc214197918595253.scope: Deactivated successfully.
Oct 01 17:15:24 compute-0 systemd[1]: libpod-c0b81d99c0104e79fc9ea896464569d36c785bcb33d37d1cc214197918595253.scope: Consumed 1.190s CPU time.
Oct 01 17:15:24 compute-0 podman[290964]: 2025-10-01 17:15:24.79570912 +0000 UTC m=+1.375353095 container died c0b81d99c0104e79fc9ea896464569d36c785bcb33d37d1cc214197918595253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 17:15:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbe7b0896a6e64be133d4fcb768945f31f7e41811059a38cc86f9a6a7060954b-merged.mount: Deactivated successfully.
Oct 01 17:15:24 compute-0 podman[290964]: 2025-10-01 17:15:24.859458794 +0000 UTC m=+1.439102779 container remove c0b81d99c0104e79fc9ea896464569d36c785bcb33d37d1cc214197918595253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 17:15:24 compute-0 systemd[1]: libpod-conmon-c0b81d99c0104e79fc9ea896464569d36c785bcb33d37d1cc214197918595253.scope: Deactivated successfully.
Oct 01 17:15:24 compute-0 sudo[290860]: pam_unix(sudo:session): session closed for user root
Oct 01 17:15:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:15:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:15:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:15:24 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:15:24 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 13248177-7d83-4f87-8d7e-0c8e7f80d94a does not exist
Oct 01 17:15:24 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 18064fc7-6f56-4d96-a2ca-decd431d169b does not exist
Oct 01 17:15:25 compute-0 sudo[291028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:15:25 compute-0 sudo[291028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:15:25 compute-0 sudo[291028]: pam_unix(sudo:session): session closed for user root
Oct 01 17:15:25 compute-0 sudo[291053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 17:15:25 compute-0 sudo[291053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:15:25 compute-0 sudo[291053]: pam_unix(sudo:session): session closed for user root
Oct 01 17:15:25 compute-0 ceph-mon[74273]: pgmap v1400: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:15:25 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:15:26 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:26 compute-0 podman[291078]: 2025-10-01 17:15:26.829207722 +0000 UTC m=+0.143069720 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 01 17:15:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:15:27 compute-0 ceph-mon[74273]: pgmap v1401: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:27 compute-0 nova_compute[259504]: 2025-10-01 17:15:27.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:15:27 compute-0 nova_compute[259504]: 2025-10-01 17:15:27.752 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 17:15:27 compute-0 nova_compute[259504]: 2025-10-01 17:15:27.752 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 17:15:27 compute-0 nova_compute[259504]: 2025-10-01 17:15:27.773 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 17:15:27 compute-0 nova_compute[259504]: 2025-10-01 17:15:27.774 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:15:27 compute-0 nova_compute[259504]: 2025-10-01 17:15:27.806 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:15:27 compute-0 nova_compute[259504]: 2025-10-01 17:15:27.807 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:15:27 compute-0 nova_compute[259504]: 2025-10-01 17:15:27.807 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:15:27 compute-0 nova_compute[259504]: 2025-10-01 17:15:27.807 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 17:15:27 compute-0 nova_compute[259504]: 2025-10-01 17:15:27.808 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:15:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:15:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1363745359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:15:28 compute-0 nova_compute[259504]: 2025-10-01 17:15:28.287 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:15:28 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1363745359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:15:28 compute-0 nova_compute[259504]: 2025-10-01 17:15:28.479 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 17:15:28 compute-0 nova_compute[259504]: 2025-10-01 17:15:28.480 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4917MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 17:15:28 compute-0 nova_compute[259504]: 2025-10-01 17:15:28.480 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:15:28 compute-0 nova_compute[259504]: 2025-10-01 17:15:28.480 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:15:28 compute-0 nova_compute[259504]: 2025-10-01 17:15:28.558 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 17:15:28 compute-0 nova_compute[259504]: 2025-10-01 17:15:28.559 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 17:15:28 compute-0 nova_compute[259504]: 2025-10-01 17:15:28.575 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:15:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:15:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/368746818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:15:29 compute-0 nova_compute[259504]: 2025-10-01 17:15:29.005 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:15:29 compute-0 nova_compute[259504]: 2025-10-01 17:15:29.012 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 17:15:29 compute-0 nova_compute[259504]: 2025-10-01 17:15:29.039 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 17:15:29 compute-0 nova_compute[259504]: 2025-10-01 17:15:29.040 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 17:15:29 compute-0 nova_compute[259504]: 2025-10-01 17:15:29.041 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:15:29 compute-0 ceph-mon[74273]: pgmap v1402: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:29 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/368746818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:15:30 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:31 compute-0 nova_compute[259504]: 2025-10-01 17:15:31.017 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:15:31 compute-0 nova_compute[259504]: 2025-10-01 17:15:31.017 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:15:31 compute-0 ceph-mon[74273]: pgmap v1403: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:31 compute-0 podman[291149]: 2025-10-01 17:15:31.757804791 +0000 UTC m=+0.070991790 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 01 17:15:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:15:32 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:32 compute-0 nova_compute[259504]: 2025-10-01 17:15:32.746 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:15:33 compute-0 ceph-mon[74273]: pgmap v1404: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:33 compute-0 nova_compute[259504]: 2025-10-01 17:15:33.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:15:34 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:34 compute-0 nova_compute[259504]: 2025-10-01 17:15:34.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:15:34 compute-0 nova_compute[259504]: 2025-10-01 17:15:34.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:15:34 compute-0 nova_compute[259504]: 2025-10-01 17:15:34.751 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 17:15:35 compute-0 ceph-mon[74273]: pgmap v1405: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:36 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:15:37 compute-0 ceph-mon[74273]: pgmap v1406: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:38 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:38 compute-0 ceph-mon[74273]: pgmap v1407: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:40 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:15:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:15:41 compute-0 ceph-mon[74273]: pgmap v1408: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:15:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:15:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:15:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:15:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:15:42 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:43 compute-0 ceph-mon[74273]: pgmap v1409: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 17:15:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2443994400' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:15:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 17:15:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2443994400' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:15:44 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2443994400' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:15:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/2443994400' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:15:44 compute-0 ceph-mon[74273]: pgmap v1410: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:44 compute-0 podman[291169]: 2025-10-01 17:15:44.769568101 +0000 UTC m=+0.074961514 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Oct 01 17:15:44 compute-0 podman[291170]: 2025-10-01 17:15:44.770252944 +0000 UTC m=+0.079330801 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 01 17:15:46 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:15:47 compute-0 ceph-mon[74273]: pgmap v1411: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:48 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:48 compute-0 ceph-mon[74273]: pgmap v1412: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:50 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:51 compute-0 ceph-mon[74273]: pgmap v1413: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:15:52 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:53 compute-0 ceph-mon[74273]: pgmap v1414: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:54 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:55 compute-0 ceph-mon[74273]: pgmap v1415: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:56 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:15:57 compute-0 ceph-mon[74273]: pgmap v1416: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:57 compute-0 podman[291206]: 2025-10-01 17:15:57.778697813 +0000 UTC m=+0.097732282 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
Oct 01 17:15:58 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:15:59 compute-0 ceph-mon[74273]: pgmap v1417: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:00 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:00 compute-0 ceph-mon[74273]: pgmap v1418: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:16:02 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:02 compute-0 podman[291232]: 2025-10-01 17:16:02.731836719 +0000 UTC m=+0.047660661 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 01 17:16:03 compute-0 ceph-mon[74273]: pgmap v1419: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:04 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:05 compute-0 ceph-mon[74273]: pgmap v1420: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:06 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:06 compute-0 ceph-mon[74273]: pgmap v1421: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:16:08 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:09 compute-0 ceph-mon[74273]: pgmap v1422: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:10 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:16:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:16:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_17:16:11
Oct 01 17:16:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 17:16:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 17:16:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta']
Oct 01 17:16:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 17:16:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:16:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:16:11 compute-0 ceph-mon[74273]: pgmap v1423: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:16:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:16:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 17:16:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:16:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 17:16:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:16:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:16:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:16:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:16:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:16:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:16:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:16:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:16:12 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:13 compute-0 ceph-mon[74273]: pgmap v1424: 305 pgs: 305 active+clean; 77 MiB data, 331 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:14 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 17:16:15 compute-0 ceph-mon[74273]: pgmap v1425: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 17:16:15 compute-0 podman[291252]: 2025-10-01 17:16:15.749635821 +0000 UTC m=+0.067478556 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 01 17:16:15 compute-0 podman[291253]: 2025-10-01 17:16:15.75700908 +0000 UTC m=+0.066686226 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid)
Oct 01 17:16:16 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 17:16:16 compute-0 ceph-mon[74273]: pgmap v1426: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 17:16:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:16:18 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 17:16:18 compute-0 nova_compute[259504]: 2025-10-01 17:16:18.710 2 DEBUG oslo_concurrency.processutils [None req-7ff73782-b9af-446a-8d0f-ba14cefc86b6 39c4052e56fa42f19855653f980c1235 fc20e0a59557470e8cf5189ec2e95a2e - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:16:18 compute-0 nova_compute[259504]: 2025-10-01 17:16:18.768 2 DEBUG oslo_concurrency.processutils [None req-7ff73782-b9af-446a-8d0f-ba14cefc86b6 39c4052e56fa42f19855653f980c1235 fc20e0a59557470e8cf5189ec2e95a2e - - default default] CMD "env LANG=C uptime" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:16:19 compute-0 ceph-mon[74273]: pgmap v1427: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 17:16:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:16:19.986 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:16:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:16:19.986 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:16:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:16:19.986 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:16:20 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 17:16:20 compute-0 ceph-mon[74273]: pgmap v1428: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005739061380803542 of space, bias 4.0, pg target 0.6886873656964251 quantized to 16 (current 16)
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:16:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:16:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:16:22 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 17:16:22 compute-0 ceph-mon[74273]: pgmap v1429: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 17:16:23 compute-0 nova_compute[259504]: 2025-10-01 17:16:23.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:16:24 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 17:16:24 compute-0 ceph-mon[74273]: pgmap v1430: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 17:16:25 compute-0 sudo[291293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:16:25 compute-0 sudo[291293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:25 compute-0 sudo[291293]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:25 compute-0 sudo[291318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:16:25 compute-0 sudo[291318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:25 compute-0 sudo[291318]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:25 compute-0 sudo[291343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:16:25 compute-0 sudo[291343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:25 compute-0 sudo[291343]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:25 compute-0 sudo[291368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 01 17:16:25 compute-0 sudo[291368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:25 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:16:25.659 162304 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:71:db', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:60:3f:78:bd:29'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 17:16:25 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:16:25.661 162304 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 17:16:25 compute-0 sudo[291368]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:16:25 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:16:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:16:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:16:26 compute-0 sudo[291412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:16:26 compute-0 sudo[291412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:26 compute-0 sudo[291412]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:26 compute-0 sudo[291437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:16:26 compute-0 sudo[291437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:26 compute-0 sudo[291437]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:26 compute-0 sudo[291462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:16:26 compute-0 sudo[291462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:26 compute-0 sudo[291462]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:26 compute-0 sudo[291487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 17:16:26 compute-0 sudo[291487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:26 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:26 compute-0 sudo[291487]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:16:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:16:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 17:16:26 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:16:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 17:16:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:16:27 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 03ab2095-d514-4c1f-a3ff-de4f678bffff does not exist
Oct 01 17:16:27 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev afc75023-0e5a-4b37-abac-8df09bf67a19 does not exist
Oct 01 17:16:27 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev e70928e7-66b4-4c9c-9dd0-b45cee5f70d2 does not exist
Oct 01 17:16:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 17:16:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:16:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:16:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 17:16:27 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:16:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:16:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:16:27 compute-0 sudo[291543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:16:27 compute-0 sudo[291543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:27 compute-0 sudo[291543]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:27 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:16:27 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:16:27 compute-0 ceph-mon[74273]: pgmap v1431: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:27 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:16:27 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:16:27 compute-0 sudo[291568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:16:27 compute-0 sudo[291568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:27 compute-0 sudo[291568]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:27 compute-0 sudo[291593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:16:27 compute-0 sudo[291593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:27 compute-0 sudo[291593]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:27 compute-0 sudo[291618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 17:16:27 compute-0 sudo[291618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:27 compute-0 podman[291683]: 2025-10-01 17:16:27.890247385 +0000 UTC m=+0.026668011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:16:28 compute-0 podman[291683]: 2025-10-01 17:16:28.070502545 +0000 UTC m=+0.206923181 container create 22a1bfe5bafb6a163a7969b4589ee7bd2af2de2907e4b115d90996ea6e5dd260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_allen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 17:16:28 compute-0 systemd[1]: Started libpod-conmon-22a1bfe5bafb6a163a7969b4589ee7bd2af2de2907e4b115d90996ea6e5dd260.scope.
Oct 01 17:16:28 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:16:28 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:28 compute-0 podman[291683]: 2025-10-01 17:16:28.502661401 +0000 UTC m=+0.639082107 container init 22a1bfe5bafb6a163a7969b4589ee7bd2af2de2907e4b115d90996ea6e5dd260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:16:28 compute-0 podman[291683]: 2025-10-01 17:16:28.510400115 +0000 UTC m=+0.646820741 container start 22a1bfe5bafb6a163a7969b4589ee7bd2af2de2907e4b115d90996ea6e5dd260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:16:28 compute-0 priceless_allen[291715]: 167 167
Oct 01 17:16:28 compute-0 systemd[1]: libpod-22a1bfe5bafb6a163a7969b4589ee7bd2af2de2907e4b115d90996ea6e5dd260.scope: Deactivated successfully.
Oct 01 17:16:28 compute-0 podman[291683]: 2025-10-01 17:16:28.76064817 +0000 UTC m=+0.897068766 container attach 22a1bfe5bafb6a163a7969b4589ee7bd2af2de2907e4b115d90996ea6e5dd260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_allen, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:16:28 compute-0 podman[291683]: 2025-10-01 17:16:28.762177711 +0000 UTC m=+0.898598327 container died 22a1bfe5bafb6a163a7969b4589ee7bd2af2de2907e4b115d90996ea6e5dd260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_allen, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 17:16:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:16:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:16:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:16:28 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:16:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d25fbb0ec9543f40980e86d57a0f194ebe9e40116ba7e1db501e1847f7e10ed-merged.mount: Deactivated successfully.
Oct 01 17:16:29 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:16:29.663 162304 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d2971fc2-5b75-459a-98a0-6e626d0d4d99, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 17:16:29 compute-0 nova_compute[259504]: 2025-10-01 17:16:29.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:16:29 compute-0 nova_compute[259504]: 2025-10-01 17:16:29.751 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 17:16:29 compute-0 nova_compute[259504]: 2025-10-01 17:16:29.751 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 17:16:29 compute-0 nova_compute[259504]: 2025-10-01 17:16:29.782 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 17:16:29 compute-0 nova_compute[259504]: 2025-10-01 17:16:29.783 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:16:29 compute-0 nova_compute[259504]: 2025-10-01 17:16:29.809 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:16:29 compute-0 nova_compute[259504]: 2025-10-01 17:16:29.810 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:16:29 compute-0 nova_compute[259504]: 2025-10-01 17:16:29.810 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:16:29 compute-0 nova_compute[259504]: 2025-10-01 17:16:29.810 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 17:16:29 compute-0 nova_compute[259504]: 2025-10-01 17:16:29.810 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:16:30 compute-0 podman[291683]: 2025-10-01 17:16:30.0042215 +0000 UTC m=+2.140642136 container remove 22a1bfe5bafb6a163a7969b4589ee7bd2af2de2907e4b115d90996ea6e5dd260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 01 17:16:30 compute-0 systemd[1]: libpod-conmon-22a1bfe5bafb6a163a7969b4589ee7bd2af2de2907e4b115d90996ea6e5dd260.scope: Deactivated successfully.
Oct 01 17:16:30 compute-0 podman[291698]: 2025-10-01 17:16:30.067378188 +0000 UTC m=+1.940643339 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller)
Oct 01 17:16:30 compute-0 ceph-mon[74273]: pgmap v1432: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:30 compute-0 podman[291768]: 2025-10-01 17:16:30.162234055 +0000 UTC m=+0.023446871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:16:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:16:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/996960972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:16:30 compute-0 nova_compute[259504]: 2025-10-01 17:16:30.360 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:16:30 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:30 compute-0 podman[291768]: 2025-10-01 17:16:30.439575835 +0000 UTC m=+0.300788641 container create e2166389a3ca1eaa4f08a59ba65734d932f1d3d5c2c46467e0dee99ca07a6f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:16:30 compute-0 nova_compute[259504]: 2025-10-01 17:16:30.514 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 17:16:30 compute-0 nova_compute[259504]: 2025-10-01 17:16:30.515 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4958MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 17:16:30 compute-0 nova_compute[259504]: 2025-10-01 17:16:30.515 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:16:30 compute-0 nova_compute[259504]: 2025-10-01 17:16:30.516 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:16:30 compute-0 systemd[1]: Started libpod-conmon-e2166389a3ca1eaa4f08a59ba65734d932f1d3d5c2c46467e0dee99ca07a6f79.scope.
Oct 01 17:16:30 compute-0 nova_compute[259504]: 2025-10-01 17:16:30.680 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 17:16:30 compute-0 nova_compute[259504]: 2025-10-01 17:16:30.680 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 17:16:30 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/797d8a1524f44321aeeed62075e8d9f7d0bc3e10a6163f60bd3f5a12c502217c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/797d8a1524f44321aeeed62075e8d9f7d0bc3e10a6163f60bd3f5a12c502217c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/797d8a1524f44321aeeed62075e8d9f7d0bc3e10a6163f60bd3f5a12c502217c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/797d8a1524f44321aeeed62075e8d9f7d0bc3e10a6163f60bd3f5a12c502217c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/797d8a1524f44321aeeed62075e8d9f7d0bc3e10a6163f60bd3f5a12c502217c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 17:16:30 compute-0 nova_compute[259504]: 2025-10-01 17:16:30.697 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:16:30 compute-0 podman[291768]: 2025-10-01 17:16:30.950747672 +0000 UTC m=+0.811960488 container init e2166389a3ca1eaa4f08a59ba65734d932f1d3d5c2c46467e0dee99ca07a6f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jones, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:16:30 compute-0 podman[291768]: 2025-10-01 17:16:30.958498896 +0000 UTC m=+0.819711682 container start e2166389a3ca1eaa4f08a59ba65734d932f1d3d5c2c46467e0dee99ca07a6f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jones, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:16:31 compute-0 podman[291768]: 2025-10-01 17:16:31.129632559 +0000 UTC m=+0.990845375 container attach e2166389a3ca1eaa4f08a59ba65734d932f1d3d5c2c46467e0dee99ca07a6f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jones, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:16:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:16:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4165861182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:16:31 compute-0 nova_compute[259504]: 2025-10-01 17:16:31.278 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:16:31 compute-0 nova_compute[259504]: 2025-10-01 17:16:31.284 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 17:16:31 compute-0 nova_compute[259504]: 2025-10-01 17:16:31.300 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 17:16:31 compute-0 nova_compute[259504]: 2025-10-01 17:16:31.301 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 17:16:31 compute-0 nova_compute[259504]: 2025-10-01 17:16:31.301 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:16:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/996960972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:16:31 compute-0 ceph-mon[74273]: pgmap v1433: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:31 compute-0 mystifying_jones[291786]: --> passed data devices: 0 physical, 3 LVM
Oct 01 17:16:31 compute-0 mystifying_jones[291786]: --> relative data size: 1.0
Oct 01 17:16:31 compute-0 mystifying_jones[291786]: --> All data devices are unavailable
Oct 01 17:16:31 compute-0 systemd[1]: libpod-e2166389a3ca1eaa4f08a59ba65734d932f1d3d5c2c46467e0dee99ca07a6f79.scope: Deactivated successfully.
Oct 01 17:16:31 compute-0 podman[291768]: 2025-10-01 17:16:31.98182007 +0000 UTC m=+1.843032846 container died e2166389a3ca1eaa4f08a59ba65734d932f1d3d5c2c46467e0dee99ca07a6f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 17:16:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:16:32 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:32 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct 01 17:16:32 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:16:32.594932) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 17:16:32 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct 01 17:16:32 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338992594994, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1232, "num_deletes": 250, "total_data_size": 1881684, "memory_usage": 1909224, "flush_reason": "Manual Compaction"}
Oct 01 17:16:32 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct 01 17:16:32 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338992725381, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1104062, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31093, "largest_seqno": 32324, "table_properties": {"data_size": 1099588, "index_size": 1934, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11743, "raw_average_key_size": 20, "raw_value_size": 1089869, "raw_average_value_size": 1918, "num_data_blocks": 88, "num_entries": 568, "num_filter_entries": 568, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759338866, "oldest_key_time": 1759338866, "file_creation_time": 1759338992, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct 01 17:16:32 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 131015 microseconds, and 3620 cpu microseconds.
Oct 01 17:16:32 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 17:16:32 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:16:32.725954) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1104062 bytes OK
Oct 01 17:16:32 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:16:32.726175) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct 01 17:16:32 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:16:32.860153) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct 01 17:16:32 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:16:32.860205) EVENT_LOG_v1 {"time_micros": 1759338992860191, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 17:16:32 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:16:32.860234) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 17:16:32 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1876098, prev total WAL file size 1889401, number of live WAL files 2.
Oct 01 17:16:32 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:16:32 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:16:32.990331) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Oct 01 17:16:32 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 17:16:32 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1078KB)], [65(10215KB)]
Oct 01 17:16:32 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338992990362, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 11564598, "oldest_snapshot_seqno": -1}
Oct 01 17:16:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-797d8a1524f44321aeeed62075e8d9f7d0bc3e10a6163f60bd3f5a12c502217c-merged.mount: Deactivated successfully.
Oct 01 17:16:33 compute-0 nova_compute[259504]: 2025-10-01 17:16:33.268 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:16:33 compute-0 nova_compute[259504]: 2025-10-01 17:16:33.269 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:16:33 compute-0 nova_compute[259504]: 2025-10-01 17:16:33.269 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:16:33 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6245 keys, 9012507 bytes, temperature: kUnknown
Oct 01 17:16:33 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338993588816, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 9012507, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8971490, "index_size": 24331, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 157002, "raw_average_key_size": 25, "raw_value_size": 8860307, "raw_average_value_size": 1418, "num_data_blocks": 995, "num_entries": 6245, "num_filter_entries": 6245, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336399, "oldest_key_time": 0, "file_creation_time": 1759338992, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct 01 17:16:33 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 17:16:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4165861182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:16:33 compute-0 ceph-mon[74273]: pgmap v1434: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:33 compute-0 nova_compute[259504]: 2025-10-01 17:16:33.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:16:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:16:33.589060) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 9012507 bytes
Oct 01 17:16:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:16:33.798834) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 19.3 rd, 15.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.0 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(18.6) write-amplify(8.2) OK, records in: 6700, records dropped: 455 output_compression: NoCompression
Oct 01 17:16:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:16:33.798870) EVENT_LOG_v1 {"time_micros": 1759338993798857, "job": 36, "event": "compaction_finished", "compaction_time_micros": 598528, "compaction_time_cpu_micros": 20868, "output_level": 6, "num_output_files": 1, "total_output_size": 9012507, "num_input_records": 6700, "num_output_records": 6245, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 17:16:33 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:16:33 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338993799197, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct 01 17:16:33 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:16:33 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338993800880, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct 01 17:16:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:16:32.990244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:16:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:16:33.800972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:16:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:16:33.800978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:16:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:16:33.800980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:16:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:16:33.800981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:16:33 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:16:33.800987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:16:34 compute-0 podman[291768]: 2025-10-01 17:16:34.045874891 +0000 UTC m=+3.907087717 container remove e2166389a3ca1eaa4f08a59ba65734d932f1d3d5c2c46467e0dee99ca07a6f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 01 17:16:34 compute-0 systemd[1]: libpod-conmon-e2166389a3ca1eaa4f08a59ba65734d932f1d3d5c2c46467e0dee99ca07a6f79.scope: Deactivated successfully.
Oct 01 17:16:34 compute-0 sudo[291618]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:34 compute-0 podman[291849]: 2025-10-01 17:16:34.148464831 +0000 UTC m=+1.121456070 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 17:16:34 compute-0 sudo[291868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:16:34 compute-0 sudo[291868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:34 compute-0 sudo[291868]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:34 compute-0 sudo[291895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:16:34 compute-0 sudo[291895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:34 compute-0 sudo[291895]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:34 compute-0 sudo[291920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:16:34 compute-0 sudo[291920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:34 compute-0 sudo[291920]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:34 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:34 compute-0 sudo[291945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 17:16:34 compute-0 sudo[291945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:34 compute-0 podman[292010]: 2025-10-01 17:16:34.778756126 +0000 UTC m=+0.037012183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:16:34 compute-0 podman[292010]: 2025-10-01 17:16:34.967510932 +0000 UTC m=+0.225766969 container create 5ec0c2bc1da48becc78313b0c43856862c43b5e9774e63c41b6be8e5599b78dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kare, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:16:35 compute-0 ceph-mon[74273]: pgmap v1435: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:35 compute-0 systemd[1]: Started libpod-conmon-5ec0c2bc1da48becc78313b0c43856862c43b5e9774e63c41b6be8e5599b78dd.scope.
Oct 01 17:16:35 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:16:35 compute-0 podman[292010]: 2025-10-01 17:16:35.337571215 +0000 UTC m=+0.595827272 container init 5ec0c2bc1da48becc78313b0c43856862c43b5e9774e63c41b6be8e5599b78dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kare, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:16:35 compute-0 podman[292010]: 2025-10-01 17:16:35.349487328 +0000 UTC m=+0.607743365 container start 5ec0c2bc1da48becc78313b0c43856862c43b5e9774e63c41b6be8e5599b78dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:16:35 compute-0 competent_kare[292027]: 167 167
Oct 01 17:16:35 compute-0 systemd[1]: libpod-5ec0c2bc1da48becc78313b0c43856862c43b5e9774e63c41b6be8e5599b78dd.scope: Deactivated successfully.
Oct 01 17:16:35 compute-0 podman[292010]: 2025-10-01 17:16:35.392326587 +0000 UTC m=+0.650582644 container attach 5ec0c2bc1da48becc78313b0c43856862c43b5e9774e63c41b6be8e5599b78dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 17:16:35 compute-0 podman[292010]: 2025-10-01 17:16:35.392638713 +0000 UTC m=+0.650894760 container died 5ec0c2bc1da48becc78313b0c43856862c43b5e9774e63c41b6be8e5599b78dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kare, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 17:16:35 compute-0 nova_compute[259504]: 2025-10-01 17:16:35.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:16:35 compute-0 nova_compute[259504]: 2025-10-01 17:16:35.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:16:35 compute-0 nova_compute[259504]: 2025-10-01 17:16:35.751 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 17:16:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc1a01e4a4f5d8f4115abb9f4f08b1588a3ebeaf0db82cf6e075841e2d892ecc-merged.mount: Deactivated successfully.
Oct 01 17:16:36 compute-0 podman[292010]: 2025-10-01 17:16:36.120433962 +0000 UTC m=+1.378689999 container remove 5ec0c2bc1da48becc78313b0c43856862c43b5e9774e63c41b6be8e5599b78dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kare, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:16:36 compute-0 systemd[1]: libpod-conmon-5ec0c2bc1da48becc78313b0c43856862c43b5e9774e63c41b6be8e5599b78dd.scope: Deactivated successfully.
Oct 01 17:16:36 compute-0 podman[292051]: 2025-10-01 17:16:36.367086321 +0000 UTC m=+0.102529753 container create aa963f7efea60ce5b45bc297dab8feed3f8a8e3e5ea13dcdbe724a02e0ff9533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:16:36 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:36 compute-0 podman[292051]: 2025-10-01 17:16:36.289037537 +0000 UTC m=+0.024480929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:16:36 compute-0 systemd[1]: Started libpod-conmon-aa963f7efea60ce5b45bc297dab8feed3f8a8e3e5ea13dcdbe724a02e0ff9533.scope.
Oct 01 17:16:36 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fabed8ae09dacf593fc473248cd70c1df348ab6819f0ef28e60d3736cbb972/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fabed8ae09dacf593fc473248cd70c1df348ab6819f0ef28e60d3736cbb972/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fabed8ae09dacf593fc473248cd70c1df348ab6819f0ef28e60d3736cbb972/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fabed8ae09dacf593fc473248cd70c1df348ab6819f0ef28e60d3736cbb972/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:16:36 compute-0 ceph-mon[74273]: pgmap v1436: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:36 compute-0 podman[292051]: 2025-10-01 17:16:36.760474486 +0000 UTC m=+0.495917888 container init aa963f7efea60ce5b45bc297dab8feed3f8a8e3e5ea13dcdbe724a02e0ff9533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_yonath, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 01 17:16:36 compute-0 podman[292051]: 2025-10-01 17:16:36.766663619 +0000 UTC m=+0.502107001 container start aa963f7efea60ce5b45bc297dab8feed3f8a8e3e5ea13dcdbe724a02e0ff9533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_yonath, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:16:36 compute-0 podman[292051]: 2025-10-01 17:16:36.834337352 +0000 UTC m=+0.569780854 container attach aa963f7efea60ce5b45bc297dab8feed3f8a8e3e5ea13dcdbe724a02e0ff9533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_yonath, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:16:37 compute-0 hungry_yonath[292068]: {
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:     "0": [
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:         {
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "devices": [
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "/dev/loop3"
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             ],
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "lv_name": "ceph_lv0",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "lv_size": "21470642176",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "name": "ceph_lv0",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "tags": {
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.cluster_name": "ceph",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.crush_device_class": "",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.encrypted": "0",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.osd_id": "0",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.type": "block",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.vdo": "0"
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             },
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "type": "block",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "vg_name": "ceph_vg0"
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:         }
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:     ],
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:     "1": [
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:         {
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "devices": [
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "/dev/loop4"
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             ],
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "lv_name": "ceph_lv1",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "lv_size": "21470642176",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "name": "ceph_lv1",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "tags": {
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.cluster_name": "ceph",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.crush_device_class": "",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.encrypted": "0",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.osd_id": "1",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.type": "block",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.vdo": "0"
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             },
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "type": "block",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "vg_name": "ceph_vg1"
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:         }
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:     ],
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:     "2": [
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:         {
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "devices": [
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "/dev/loop5"
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             ],
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "lv_name": "ceph_lv2",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "lv_size": "21470642176",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "name": "ceph_lv2",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "tags": {
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.cluster_name": "ceph",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.crush_device_class": "",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.encrypted": "0",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.osd_id": "2",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.type": "block",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:                 "ceph.vdo": "0"
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             },
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "type": "block",
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:             "vg_name": "ceph_vg2"
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:         }
Oct 01 17:16:37 compute-0 hungry_yonath[292068]:     ]
Oct 01 17:16:37 compute-0 hungry_yonath[292068]: }
Oct 01 17:16:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:16:37 compute-0 systemd[1]: libpod-aa963f7efea60ce5b45bc297dab8feed3f8a8e3e5ea13dcdbe724a02e0ff9533.scope: Deactivated successfully.
Oct 01 17:16:37 compute-0 podman[292051]: 2025-10-01 17:16:37.52415268 +0000 UTC m=+1.259596092 container died aa963f7efea60ce5b45bc297dab8feed3f8a8e3e5ea13dcdbe724a02e0ff9533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_yonath, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:16:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8fabed8ae09dacf593fc473248cd70c1df348ab6819f0ef28e60d3736cbb972-merged.mount: Deactivated successfully.
Oct 01 17:16:38 compute-0 podman[292051]: 2025-10-01 17:16:38.19438022 +0000 UTC m=+1.929823612 container remove aa963f7efea60ce5b45bc297dab8feed3f8a8e3e5ea13dcdbe724a02e0ff9533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 17:16:38 compute-0 sudo[291945]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:38 compute-0 systemd[1]: libpod-conmon-aa963f7efea60ce5b45bc297dab8feed3f8a8e3e5ea13dcdbe724a02e0ff9533.scope: Deactivated successfully.
Oct 01 17:16:38 compute-0 sudo[292089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:16:38 compute-0 sudo[292089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:38 compute-0 sudo[292089]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:38 compute-0 sudo[292114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:16:38 compute-0 sudo[292114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:38 compute-0 sudo[292114]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:38 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:38 compute-0 sudo[292139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:16:38 compute-0 sudo[292139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:38 compute-0 sudo[292139]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:38 compute-0 sudo[292164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 17:16:38 compute-0 sudo[292164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:38 compute-0 podman[292229]: 2025-10-01 17:16:38.813342475 +0000 UTC m=+0.048358583 container create 54eaea4bd25f44447cdf70802e4f505bc000ae617a55670cf8bf015876f27be3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ganguly, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:16:38 compute-0 systemd[1]: Started libpod-conmon-54eaea4bd25f44447cdf70802e4f505bc000ae617a55670cf8bf015876f27be3.scope.
Oct 01 17:16:38 compute-0 podman[292229]: 2025-10-01 17:16:38.790031793 +0000 UTC m=+0.025047921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:16:38 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:16:38 compute-0 podman[292229]: 2025-10-01 17:16:38.915283134 +0000 UTC m=+0.150299252 container init 54eaea4bd25f44447cdf70802e4f505bc000ae617a55670cf8bf015876f27be3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ganguly, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 17:16:38 compute-0 podman[292229]: 2025-10-01 17:16:38.92288435 +0000 UTC m=+0.157900458 container start 54eaea4bd25f44447cdf70802e4f505bc000ae617a55670cf8bf015876f27be3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ganguly, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 17:16:38 compute-0 podman[292229]: 2025-10-01 17:16:38.926678163 +0000 UTC m=+0.161694291 container attach 54eaea4bd25f44447cdf70802e4f505bc000ae617a55670cf8bf015876f27be3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 01 17:16:38 compute-0 hopeful_ganguly[292246]: 167 167
Oct 01 17:16:38 compute-0 systemd[1]: libpod-54eaea4bd25f44447cdf70802e4f505bc000ae617a55670cf8bf015876f27be3.scope: Deactivated successfully.
Oct 01 17:16:38 compute-0 podman[292229]: 2025-10-01 17:16:38.928407672 +0000 UTC m=+0.163423800 container died 54eaea4bd25f44447cdf70802e4f505bc000ae617a55670cf8bf015876f27be3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ganguly, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 17:16:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-09142f6c83e81c30da3a638efe4827002c5310de1afa942bc0e8921ebb019f72-merged.mount: Deactivated successfully.
Oct 01 17:16:38 compute-0 podman[292229]: 2025-10-01 17:16:38.965118978 +0000 UTC m=+0.200135086 container remove 54eaea4bd25f44447cdf70802e4f505bc000ae617a55670cf8bf015876f27be3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ganguly, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:16:38 compute-0 systemd[1]: libpod-conmon-54eaea4bd25f44447cdf70802e4f505bc000ae617a55670cf8bf015876f27be3.scope: Deactivated successfully.
Oct 01 17:16:39 compute-0 podman[292270]: 2025-10-01 17:16:39.136412039 +0000 UTC m=+0.051563413 container create a3e92b2b4458737ec4e456c9454c22d1f8df40f40810804c2b73ff229f12ac55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 17:16:39 compute-0 systemd[1]: Started libpod-conmon-a3e92b2b4458737ec4e456c9454c22d1f8df40f40810804c2b73ff229f12ac55.scope.
Oct 01 17:16:39 compute-0 podman[292270]: 2025-10-01 17:16:39.114291303 +0000 UTC m=+0.029442687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:16:39 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:16:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/453e33aebe2d608cbb125dc060aeacd567744ecf8bb086c154a250d7fd89a637/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:16:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/453e33aebe2d608cbb125dc060aeacd567744ecf8bb086c154a250d7fd89a637/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:16:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/453e33aebe2d608cbb125dc060aeacd567744ecf8bb086c154a250d7fd89a637/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:16:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/453e33aebe2d608cbb125dc060aeacd567744ecf8bb086c154a250d7fd89a637/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:16:39 compute-0 podman[292270]: 2025-10-01 17:16:39.24717255 +0000 UTC m=+0.162323934 container init a3e92b2b4458737ec4e456c9454c22d1f8df40f40810804c2b73ff229f12ac55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hofstadter, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 01 17:16:39 compute-0 podman[292270]: 2025-10-01 17:16:39.254285691 +0000 UTC m=+0.169437055 container start a3e92b2b4458737ec4e456c9454c22d1f8df40f40810804c2b73ff229f12ac55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hofstadter, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:16:39 compute-0 podman[292270]: 2025-10-01 17:16:39.297452058 +0000 UTC m=+0.212603462 container attach a3e92b2b4458737ec4e456c9454c22d1f8df40f40810804c2b73ff229f12ac55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hofstadter, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 01 17:16:39 compute-0 ceph-mon[74273]: pgmap v1437: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]: {
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:         "osd_id": 2,
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:         "type": "bluestore"
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:     },
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:         "osd_id": 0,
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:         "type": "bluestore"
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:     },
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:         "osd_id": 1,
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:         "type": "bluestore"
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]:     }
Oct 01 17:16:40 compute-0 suspicious_hofstadter[292286]: }
Oct 01 17:16:40 compute-0 systemd[1]: libpod-a3e92b2b4458737ec4e456c9454c22d1f8df40f40810804c2b73ff229f12ac55.scope: Deactivated successfully.
Oct 01 17:16:40 compute-0 podman[292270]: 2025-10-01 17:16:40.276770745 +0000 UTC m=+1.191922109 container died a3e92b2b4458737ec4e456c9454c22d1f8df40f40810804c2b73ff229f12ac55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hofstadter, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:16:40 compute-0 systemd[1]: libpod-a3e92b2b4458737ec4e456c9454c22d1f8df40f40810804c2b73ff229f12ac55.scope: Consumed 1.029s CPU time.
Oct 01 17:16:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-453e33aebe2d608cbb125dc060aeacd567744ecf8bb086c154a250d7fd89a637-merged.mount: Deactivated successfully.
Oct 01 17:16:40 compute-0 podman[292270]: 2025-10-01 17:16:40.363296285 +0000 UTC m=+1.278447649 container remove a3e92b2b4458737ec4e456c9454c22d1f8df40f40810804c2b73ff229f12ac55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hofstadter, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 17:16:40 compute-0 systemd[1]: libpod-conmon-a3e92b2b4458737ec4e456c9454c22d1f8df40f40810804c2b73ff229f12ac55.scope: Deactivated successfully.
Oct 01 17:16:40 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:40 compute-0 sudo[292164]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:16:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:16:40 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:16:40 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:16:40 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev c95e276d-7e76-4929-b64b-170afec525f4 does not exist
Oct 01 17:16:40 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 68ec25b4-b678-4d9a-9235-d8dcdd5897c5 does not exist
Oct 01 17:16:40 compute-0 sudo[292333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:16:40 compute-0 sudo[292333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:40 compute-0 sudo[292333]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:40 compute-0 sudo[292358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 17:16:40 compute-0 sudo[292358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:16:40 compute-0 sudo[292358]: pam_unix(sudo:session): session closed for user root
Oct 01 17:16:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:16:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:16:41 compute-0 ceph-mon[74273]: pgmap v1438: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:16:41 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:16:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:16:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:16:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:16:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:16:41 compute-0 nova_compute[259504]: 2025-10-01 17:16:41.747 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:16:42 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:16:43 compute-0 ceph-mon[74273]: pgmap v1439: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 17:16:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1043460470' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:16:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 17:16:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1043460470' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:16:44 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1043460470' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:16:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/1043460470' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:16:44 compute-0 ceph-mon[74273]: pgmap v1440: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:46 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:46 compute-0 ceph-mon[74273]: pgmap v1441: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:46 compute-0 podman[292384]: 2025-10-01 17:16:46.746493398 +0000 UTC m=+0.064054889 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:16:46 compute-0 podman[292383]: 2025-10-01 17:16:46.746871773 +0000 UTC m=+0.063573355 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 01 17:16:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:16:48 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:49 compute-0 ceph-mon[74273]: pgmap v1442: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:50 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:51 compute-0 ceph-mon[74273]: pgmap v1443: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:52 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:16:52 compute-0 ceph-mon[74273]: pgmap v1444: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:54 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:55 compute-0 ceph-mon[74273]: pgmap v1445: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:56 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:57 compute-0 ceph-mon[74273]: pgmap v1446: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:16:58 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:16:59 compute-0 ceph-mon[74273]: pgmap v1447: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:00 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:00 compute-0 podman[292423]: 2025-10-01 17:17:00.759242983 +0000 UTC m=+0.082634364 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 01 17:17:01 compute-0 ceph-mon[74273]: pgmap v1448: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:02 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:17:03 compute-0 ceph-mon[74273]: pgmap v1449: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:04 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:04 compute-0 ceph-mon[74273]: pgmap v1450: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:04 compute-0 podman[292449]: 2025-10-01 17:17:04.770982463 +0000 UTC m=+0.088708050 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent)
Oct 01 17:17:06 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:06 compute-0 ceph-mon[74273]: pgmap v1451: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:17:08 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:09 compute-0 ceph-mon[74273]: pgmap v1452: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:10 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:10 compute-0 ceph-mon[74273]: pgmap v1453: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:17:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:17:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_17:17:11
Oct 01 17:17:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 17:17:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 17:17:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'backups', '.rgw.root', 'vms', 'volumes', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta']
Oct 01 17:17:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 17:17:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:17:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:17:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:17:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:17:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 17:17:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:17:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 17:17:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:17:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:17:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:17:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:17:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:17:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:17:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:17:12 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:17:13 compute-0 ceph-mon[74273]: pgmap v1454: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:14 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:15 compute-0 ceph-mon[74273]: pgmap v1455: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:16 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:17 compute-0 ceph-mon[74273]: pgmap v1456: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:17:17 compute-0 podman[292469]: 2025-10-01 17:17:17.742822036 +0000 UTC m=+0.053315679 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 01 17:17:17 compute-0 podman[292468]: 2025-10-01 17:17:17.750194067 +0000 UTC m=+0.064525368 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 17:17:18 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:19 compute-0 ceph-mon[74273]: pgmap v1457: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:17:19.987 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:17:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:17:19.988 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:17:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:17:19.988 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:17:20 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:20 compute-0 ceph-mon[74273]: pgmap v1458: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005739061380803542 of space, bias 4.0, pg target 0.6886873656964251 quantized to 16 (current 16)
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:17:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:17:22 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:17:23 compute-0 ceph-mon[74273]: pgmap v1459: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:23 compute-0 nova_compute[259504]: 2025-10-01 17:17:23.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:17:24 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:25 compute-0 ceph-mon[74273]: pgmap v1460: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:26 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:27 compute-0 ceph-mon[74273]: pgmap v1461: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:17:28 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:28 compute-0 ceph-mon[74273]: pgmap v1462: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:29 compute-0 nova_compute[259504]: 2025-10-01 17:17:29.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:17:29 compute-0 nova_compute[259504]: 2025-10-01 17:17:29.750 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 17:17:29 compute-0 nova_compute[259504]: 2025-10-01 17:17:29.750 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 17:17:29 compute-0 nova_compute[259504]: 2025-10-01 17:17:29.799 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 17:17:30 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:30 compute-0 ceph-mon[74273]: pgmap v1463: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:30 compute-0 nova_compute[259504]: 2025-10-01 17:17:30.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:17:31 compute-0 nova_compute[259504]: 2025-10-01 17:17:31.140 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:17:31 compute-0 nova_compute[259504]: 2025-10-01 17:17:31.141 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:17:31 compute-0 nova_compute[259504]: 2025-10-01 17:17:31.142 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:17:31 compute-0 nova_compute[259504]: 2025-10-01 17:17:31.142 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 17:17:31 compute-0 nova_compute[259504]: 2025-10-01 17:17:31.142 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:17:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:17:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/243270176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:17:31 compute-0 nova_compute[259504]: 2025-10-01 17:17:31.589 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:17:31 compute-0 nova_compute[259504]: 2025-10-01 17:17:31.733 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 17:17:31 compute-0 nova_compute[259504]: 2025-10-01 17:17:31.734 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4977MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 17:17:31 compute-0 nova_compute[259504]: 2025-10-01 17:17:31.734 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:17:31 compute-0 nova_compute[259504]: 2025-10-01 17:17:31.735 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:17:31 compute-0 podman[292531]: 2025-10-01 17:17:31.782243798 +0000 UTC m=+0.092188782 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 17:17:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/243270176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:17:31 compute-0 nova_compute[259504]: 2025-10-01 17:17:31.842 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 17:17:31 compute-0 nova_compute[259504]: 2025-10-01 17:17:31.842 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 17:17:31 compute-0 nova_compute[259504]: 2025-10-01 17:17:31.869 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:17:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:17:32 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/443998811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:17:32 compute-0 nova_compute[259504]: 2025-10-01 17:17:32.310 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:17:32 compute-0 nova_compute[259504]: 2025-10-01 17:17:32.316 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 17:17:32 compute-0 nova_compute[259504]: 2025-10-01 17:17:32.386 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 17:17:32 compute-0 nova_compute[259504]: 2025-10-01 17:17:32.388 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 17:17:32 compute-0 nova_compute[259504]: 2025-10-01 17:17:32.388 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:17:32 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:17:32 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/443998811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:17:32 compute-0 ceph-mon[74273]: pgmap v1464: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:34 compute-0 nova_compute[259504]: 2025-10-01 17:17:34.383 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:17:34 compute-0 nova_compute[259504]: 2025-10-01 17:17:34.384 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:17:34 compute-0 nova_compute[259504]: 2025-10-01 17:17:34.384 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:17:34 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:34 compute-0 ceph-mon[74273]: pgmap v1465: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:35 compute-0 podman[292580]: 2025-10-01 17:17:35.736674383 +0000 UTC m=+0.053073742 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 01 17:17:35 compute-0 nova_compute[259504]: 2025-10-01 17:17:35.752 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:17:35 compute-0 nova_compute[259504]: 2025-10-01 17:17:35.753 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:17:36 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:36 compute-0 nova_compute[259504]: 2025-10-01 17:17:36.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:17:36 compute-0 nova_compute[259504]: 2025-10-01 17:17:36.750 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 17:17:36 compute-0 ceph-mon[74273]: pgmap v1466: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:17:38 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:38 compute-0 ceph-mon[74273]: pgmap v1467: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:40 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:40 compute-0 sudo[292599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:17:40 compute-0 sudo[292599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:40 compute-0 sudo[292599]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:40 compute-0 sudo[292624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:17:40 compute-0 sudo[292624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:40 compute-0 sudo[292624]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:40 compute-0 sudo[292649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:17:40 compute-0 sudo[292649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:40 compute-0 sudo[292649]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:40 compute-0 sudo[292674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 17:17:40 compute-0 sudo[292674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:41 compute-0 ceph-mon[74273]: pgmap v1468: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:17:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:17:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:17:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:17:41 compute-0 podman[292773]: 2025-10-01 17:17:41.55462943 +0000 UTC m=+0.208832832 container exec bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 17:17:41 compute-0 podman[292773]: 2025-10-01 17:17:41.683511067 +0000 UTC m=+0.337714479 container exec_died bfdaa9b78cc1558959452c7020a00aa78f3da27e3ededf3766f2f88165c2443b (image=quay.io/ceph/ceph:v18, name=ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 01 17:17:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:17:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:17:42 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:17:42 compute-0 sudo[292674]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:17:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:17:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:17:42 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:17:42 compute-0 sudo[292933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:17:42 compute-0 sudo[292933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:42 compute-0 sudo[292933]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:43 compute-0 sudo[292958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:17:43 compute-0 sudo[292958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:43 compute-0 sudo[292958]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:43 compute-0 sudo[292983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:17:43 compute-0 sudo[292983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:43 compute-0 sudo[292983]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:43 compute-0 sudo[293008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 17:17:43 compute-0 sudo[293008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:43 compute-0 ceph-mon[74273]: pgmap v1469: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:43 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:17:43 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:17:43 compute-0 sudo[293008]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 01 17:17:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 17:17:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:17:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:17:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 17:17:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:17:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 17:17:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:17:43 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 0411bb71-4732-475c-b35e-b352fdaacd0e does not exist
Oct 01 17:17:43 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 63ca0b81-c175-45db-bf42-5997c6c0d267 does not exist
Oct 01 17:17:43 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 0232e116-77c9-4d27-958f-2346045887bc does not exist
Oct 01 17:17:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 17:17:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:17:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 17:17:43 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:17:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:17:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:17:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 17:17:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3130756497' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:17:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 17:17:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3130756497' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:17:43 compute-0 sudo[293064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:17:43 compute-0 sudo[293064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:43 compute-0 sudo[293064]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:43 compute-0 sudo[293089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:17:43 compute-0 sudo[293089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:43 compute-0 sudo[293089]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:44 compute-0 sudo[293114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:17:44 compute-0 sudo[293114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:44 compute-0 sudo[293114]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:44 compute-0 sudo[293139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 17:17:44 compute-0 sudo[293139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:44 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:44 compute-0 podman[293206]: 2025-10-01 17:17:44.572048495 +0000 UTC m=+0.132107044 container create 389994a532fedf8fb5ac39563d14547c593161695bf4b5786b53d70128542175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 17:17:44 compute-0 podman[293206]: 2025-10-01 17:17:44.482283607 +0000 UTC m=+0.042342206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:17:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 17:17:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:17:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:17:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:17:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:17:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:17:44 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:17:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3130756497' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:17:44 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/3130756497' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:17:44 compute-0 ceph-mon[74273]: pgmap v1470: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:44 compute-0 systemd[1]: Started libpod-conmon-389994a532fedf8fb5ac39563d14547c593161695bf4b5786b53d70128542175.scope.
Oct 01 17:17:44 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:17:45 compute-0 podman[293206]: 2025-10-01 17:17:45.145440905 +0000 UTC m=+0.705499454 container init 389994a532fedf8fb5ac39563d14547c593161695bf4b5786b53d70128542175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lamport, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 01 17:17:45 compute-0 podman[293206]: 2025-10-01 17:17:45.156734984 +0000 UTC m=+0.716793513 container start 389994a532fedf8fb5ac39563d14547c593161695bf4b5786b53d70128542175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lamport, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 17:17:45 compute-0 strange_lamport[293222]: 167 167
Oct 01 17:17:45 compute-0 systemd[1]: libpod-389994a532fedf8fb5ac39563d14547c593161695bf4b5786b53d70128542175.scope: Deactivated successfully.
Oct 01 17:17:45 compute-0 podman[293206]: 2025-10-01 17:17:45.256230307 +0000 UTC m=+0.816288846 container attach 389994a532fedf8fb5ac39563d14547c593161695bf4b5786b53d70128542175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lamport, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:17:45 compute-0 podman[293206]: 2025-10-01 17:17:45.257673001 +0000 UTC m=+0.817731520 container died 389994a532fedf8fb5ac39563d14547c593161695bf4b5786b53d70128542175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lamport, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 17:17:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4e588ac30a2406d01f122d044b904c859498e255e76f16cec0a8adef17e7a17-merged.mount: Deactivated successfully.
Oct 01 17:17:46 compute-0 podman[293206]: 2025-10-01 17:17:46.269642656 +0000 UTC m=+1.829701185 container remove 389994a532fedf8fb5ac39563d14547c593161695bf4b5786b53d70128542175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lamport, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 17:17:46 compute-0 systemd[1]: libpod-conmon-389994a532fedf8fb5ac39563d14547c593161695bf4b5786b53d70128542175.scope: Deactivated successfully.
Oct 01 17:17:46 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:46 compute-0 podman[293246]: 2025-10-01 17:17:46.454504583 +0000 UTC m=+0.029662333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:17:46 compute-0 podman[293246]: 2025-10-01 17:17:46.749783756 +0000 UTC m=+0.324941506 container create da9b0e9a612d5731c11c5f4788763e8f276a9c478486b6392bcf448d551a1e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:17:46 compute-0 ceph-mon[74273]: pgmap v1471: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:46 compute-0 systemd[1]: Started libpod-conmon-da9b0e9a612d5731c11c5f4788763e8f276a9c478486b6392bcf448d551a1e24.scope.
Oct 01 17:17:46 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/136bc408327db0037fea02743abc37c9709c8a1f8341a297723fa8dbd1c696cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/136bc408327db0037fea02743abc37c9709c8a1f8341a297723fa8dbd1c696cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/136bc408327db0037fea02743abc37c9709c8a1f8341a297723fa8dbd1c696cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/136bc408327db0037fea02743abc37c9709c8a1f8341a297723fa8dbd1c696cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/136bc408327db0037fea02743abc37c9709c8a1f8341a297723fa8dbd1c696cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 17:17:47 compute-0 podman[293246]: 2025-10-01 17:17:47.039617567 +0000 UTC m=+0.614775307 container init da9b0e9a612d5731c11c5f4788763e8f276a9c478486b6392bcf448d551a1e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:17:47 compute-0 podman[293246]: 2025-10-01 17:17:47.046765181 +0000 UTC m=+0.621922891 container start da9b0e9a612d5731c11c5f4788763e8f276a9c478486b6392bcf448d551a1e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:17:47 compute-0 podman[293246]: 2025-10-01 17:17:47.110998952 +0000 UTC m=+0.686156752 container attach da9b0e9a612d5731c11c5f4788763e8f276a9c478486b6392bcf448d551a1e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_proskuriakova, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 17:17:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:17:48 compute-0 quirky_proskuriakova[293263]: --> passed data devices: 0 physical, 3 LVM
Oct 01 17:17:48 compute-0 quirky_proskuriakova[293263]: --> relative data size: 1.0
Oct 01 17:17:48 compute-0 quirky_proskuriakova[293263]: --> All data devices are unavailable
Oct 01 17:17:48 compute-0 systemd[1]: libpod-da9b0e9a612d5731c11c5f4788763e8f276a9c478486b6392bcf448d551a1e24.scope: Deactivated successfully.
Oct 01 17:17:48 compute-0 systemd[1]: libpod-da9b0e9a612d5731c11c5f4788763e8f276a9c478486b6392bcf448d551a1e24.scope: Consumed 1.202s CPU time.
Oct 01 17:17:48 compute-0 podman[293246]: 2025-10-01 17:17:48.286131158 +0000 UTC m=+1.861288868 container died da9b0e9a612d5731c11c5f4788763e8f276a9c478486b6392bcf448d551a1e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_proskuriakova, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 17:17:48 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-136bc408327db0037fea02743abc37c9709c8a1f8341a297723fa8dbd1c696cf-merged.mount: Deactivated successfully.
Oct 01 17:17:48 compute-0 ceph-mon[74273]: pgmap v1472: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:48 compute-0 podman[293246]: 2025-10-01 17:17:48.91707128 +0000 UTC m=+2.492228990 container remove da9b0e9a612d5731c11c5f4788763e8f276a9c478486b6392bcf448d551a1e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_proskuriakova, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 17:17:48 compute-0 systemd[1]: libpod-conmon-da9b0e9a612d5731c11c5f4788763e8f276a9c478486b6392bcf448d551a1e24.scope: Deactivated successfully.
Oct 01 17:17:48 compute-0 podman[293299]: 2025-10-01 17:17:48.965740088 +0000 UTC m=+0.641376332 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 01 17:17:48 compute-0 podman[293292]: 2025-10-01 17:17:48.975012379 +0000 UTC m=+0.650790041 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 17:17:48 compute-0 sudo[293139]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:49 compute-0 sudo[293345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:17:49 compute-0 sudo[293345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:49 compute-0 sudo[293345]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:49 compute-0 sudo[293370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:17:49 compute-0 sudo[293370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:49 compute-0 sudo[293370]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:49 compute-0 sudo[293395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:17:49 compute-0 sudo[293395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:49 compute-0 sudo[293395]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:49 compute-0 sudo[293420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 17:17:49 compute-0 sudo[293420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:49 compute-0 podman[293486]: 2025-10-01 17:17:49.64891421 +0000 UTC m=+0.024616347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:17:49 compute-0 podman[293486]: 2025-10-01 17:17:49.839158529 +0000 UTC m=+0.214860656 container create e837a469b5d2207111e165f855a2ad70ffea4663f3abcdbf4417477831b58dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 17:17:50 compute-0 systemd[1]: Started libpod-conmon-e837a469b5d2207111e165f855a2ad70ffea4663f3abcdbf4417477831b58dbb.scope.
Oct 01 17:17:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:17:50 compute-0 podman[293486]: 2025-10-01 17:17:50.167183621 +0000 UTC m=+0.542885768 container init e837a469b5d2207111e165f855a2ad70ffea4663f3abcdbf4417477831b58dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_murdock, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:17:50 compute-0 podman[293486]: 2025-10-01 17:17:50.177997435 +0000 UTC m=+0.553699582 container start e837a469b5d2207111e165f855a2ad70ffea4663f3abcdbf4417477831b58dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:17:50 compute-0 pensive_murdock[293501]: 167 167
Oct 01 17:17:50 compute-0 systemd[1]: libpod-e837a469b5d2207111e165f855a2ad70ffea4663f3abcdbf4417477831b58dbb.scope: Deactivated successfully.
Oct 01 17:17:50 compute-0 conmon[293501]: conmon e837a469b5d2207111e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e837a469b5d2207111e165f855a2ad70ffea4663f3abcdbf4417477831b58dbb.scope/container/memory.events
Oct 01 17:17:50 compute-0 podman[293486]: 2025-10-01 17:17:50.3406882 +0000 UTC m=+0.716390327 container attach e837a469b5d2207111e165f855a2ad70ffea4663f3abcdbf4417477831b58dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_murdock, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:17:50 compute-0 podman[293486]: 2025-10-01 17:17:50.341298843 +0000 UTC m=+0.717000970 container died e837a469b5d2207111e165f855a2ad70ffea4663f3abcdbf4417477831b58dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_murdock, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 17:17:50 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf31e4b029a7ec25e2cd580ff301311da55cd1d3460cd450cd1aa59e7916357d-merged.mount: Deactivated successfully.
Oct 01 17:17:50 compute-0 ceph-mon[74273]: pgmap v1473: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:51 compute-0 podman[293486]: 2025-10-01 17:17:51.182624159 +0000 UTC m=+1.558326296 container remove e837a469b5d2207111e165f855a2ad70ffea4663f3abcdbf4417477831b58dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_murdock, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 17:17:51 compute-0 systemd[1]: libpod-conmon-e837a469b5d2207111e165f855a2ad70ffea4663f3abcdbf4417477831b58dbb.scope: Deactivated successfully.
Oct 01 17:17:51 compute-0 podman[293526]: 2025-10-01 17:17:51.353530986 +0000 UTC m=+0.028526395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:17:51 compute-0 podman[293526]: 2025-10-01 17:17:51.486734078 +0000 UTC m=+0.161729447 container create 8c922eda4707652ad349a924d702868c16f642fc311978ace8a51e6e5930e1bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_montalcini, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:17:51 compute-0 systemd[1]: Started libpod-conmon-8c922eda4707652ad349a924d702868c16f642fc311978ace8a51e6e5930e1bf.scope.
Oct 01 17:17:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:17:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae6936e0c93dc9cb7f421adae07d1805556b67f89aa693ab972fc9b800f4062/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:17:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae6936e0c93dc9cb7f421adae07d1805556b67f89aa693ab972fc9b800f4062/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:17:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae6936e0c93dc9cb7f421adae07d1805556b67f89aa693ab972fc9b800f4062/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:17:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae6936e0c93dc9cb7f421adae07d1805556b67f89aa693ab972fc9b800f4062/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:17:51 compute-0 podman[293526]: 2025-10-01 17:17:51.736581538 +0000 UTC m=+0.411576917 container init 8c922eda4707652ad349a924d702868c16f642fc311978ace8a51e6e5930e1bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:17:51 compute-0 podman[293526]: 2025-10-01 17:17:51.7522447 +0000 UTC m=+0.427240059 container start 8c922eda4707652ad349a924d702868c16f642fc311978ace8a51e6e5930e1bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_montalcini, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 17:17:51 compute-0 podman[293526]: 2025-10-01 17:17:51.830813287 +0000 UTC m=+0.505808636 container attach 8c922eda4707652ad349a924d702868c16f642fc311978ace8a51e6e5930e1bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_montalcini, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 17:17:52 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]: {
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:     "0": [
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:         {
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "devices": [
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "/dev/loop3"
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             ],
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "lv_name": "ceph_lv0",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "lv_size": "21470642176",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "name": "ceph_lv0",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "tags": {
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.cluster_name": "ceph",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.crush_device_class": "",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.encrypted": "0",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.osd_id": "0",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.type": "block",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.vdo": "0"
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             },
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "type": "block",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "vg_name": "ceph_vg0"
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:         }
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:     ],
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:     "1": [
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:         {
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "devices": [
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "/dev/loop4"
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             ],
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "lv_name": "ceph_lv1",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "lv_size": "21470642176",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "name": "ceph_lv1",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "tags": {
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.cluster_name": "ceph",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.crush_device_class": "",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.encrypted": "0",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.osd_id": "1",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.type": "block",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.vdo": "0"
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             },
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "type": "block",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "vg_name": "ceph_vg1"
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:         }
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:     ],
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:     "2": [
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:         {
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "devices": [
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "/dev/loop5"
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             ],
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "lv_name": "ceph_lv2",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "lv_size": "21470642176",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "name": "ceph_lv2",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "tags": {
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.cluster_name": "ceph",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.crush_device_class": "",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.encrypted": "0",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.osd_id": "2",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.type": "block",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:                 "ceph.vdo": "0"
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             },
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "type": "block",
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:             "vg_name": "ceph_vg2"
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:         }
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]:     ]
Oct 01 17:17:52 compute-0 goofy_montalcini[293542]: }
Oct 01 17:17:52 compute-0 systemd[1]: libpod-8c922eda4707652ad349a924d702868c16f642fc311978ace8a51e6e5930e1bf.scope: Deactivated successfully.
Oct 01 17:17:52 compute-0 podman[293526]: 2025-10-01 17:17:52.547258943 +0000 UTC m=+1.222254302 container died 8c922eda4707652ad349a924d702868c16f642fc311978ace8a51e6e5930e1bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 01 17:17:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:17:53 compute-0 ceph-mon[74273]: pgmap v1474: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-aae6936e0c93dc9cb7f421adae07d1805556b67f89aa693ab972fc9b800f4062-merged.mount: Deactivated successfully.
Oct 01 17:17:54 compute-0 podman[293526]: 2025-10-01 17:17:54.396300479 +0000 UTC m=+3.071295798 container remove 8c922eda4707652ad349a924d702868c16f642fc311978ace8a51e6e5930e1bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 17:17:54 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:54 compute-0 systemd[1]: libpod-conmon-8c922eda4707652ad349a924d702868c16f642fc311978ace8a51e6e5930e1bf.scope: Deactivated successfully.
Oct 01 17:17:54 compute-0 sudo[293420]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:54 compute-0 sudo[293564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:17:54 compute-0 sudo[293564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:54 compute-0 sudo[293564]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:54 compute-0 sudo[293589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:17:54 compute-0 sudo[293589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:54 compute-0 sudo[293589]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:54 compute-0 sudo[293614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:17:54 compute-0 sudo[293614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:54 compute-0 sudo[293614]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:54 compute-0 ceph-mon[74273]: pgmap v1475: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:54 compute-0 sudo[293639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 17:17:54 compute-0 sudo[293639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:55 compute-0 podman[293705]: 2025-10-01 17:17:54.971317462 +0000 UTC m=+0.029129069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:17:55 compute-0 podman[293705]: 2025-10-01 17:17:55.150224123 +0000 UTC m=+0.208035710 container create 101e2176fbe4bfe28bba7633b87e0f973f46d41a3f2f0c16b36e31d1a042e4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatelet, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 17:17:55 compute-0 systemd[1]: Started libpod-conmon-101e2176fbe4bfe28bba7633b87e0f973f46d41a3f2f0c16b36e31d1a042e4be.scope.
Oct 01 17:17:55 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:17:55 compute-0 podman[293705]: 2025-10-01 17:17:55.397002176 +0000 UTC m=+0.454813793 container init 101e2176fbe4bfe28bba7633b87e0f973f46d41a3f2f0c16b36e31d1a042e4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatelet, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:17:55 compute-0 podman[293705]: 2025-10-01 17:17:55.405590044 +0000 UTC m=+0.463401631 container start 101e2176fbe4bfe28bba7633b87e0f973f46d41a3f2f0c16b36e31d1a042e4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:17:55 compute-0 determined_chatelet[293721]: 167 167
Oct 01 17:17:55 compute-0 systemd[1]: libpod-101e2176fbe4bfe28bba7633b87e0f973f46d41a3f2f0c16b36e31d1a042e4be.scope: Deactivated successfully.
Oct 01 17:17:55 compute-0 podman[293705]: 2025-10-01 17:17:55.542207809 +0000 UTC m=+0.600019396 container attach 101e2176fbe4bfe28bba7633b87e0f973f46d41a3f2f0c16b36e31d1a042e4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:17:55 compute-0 podman[293705]: 2025-10-01 17:17:55.542607344 +0000 UTC m=+0.600418941 container died 101e2176fbe4bfe28bba7633b87e0f973f46d41a3f2f0c16b36e31d1a042e4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatelet, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:17:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a17bda95a42280359bc29738b674120b7fd8313cb058ef7674b81398d18137c2-merged.mount: Deactivated successfully.
Oct 01 17:17:56 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:56 compute-0 podman[293705]: 2025-10-01 17:17:56.459685647 +0000 UTC m=+1.517497234 container remove 101e2176fbe4bfe28bba7633b87e0f973f46d41a3f2f0c16b36e31d1a042e4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 17:17:56 compute-0 systemd[1]: libpod-conmon-101e2176fbe4bfe28bba7633b87e0f973f46d41a3f2f0c16b36e31d1a042e4be.scope: Deactivated successfully.
Oct 01 17:17:56 compute-0 podman[293747]: 2025-10-01 17:17:56.61412999 +0000 UTC m=+0.029453525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:17:56 compute-0 ceph-mon[74273]: pgmap v1476: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:56 compute-0 podman[293747]: 2025-10-01 17:17:56.907630492 +0000 UTC m=+0.322954007 container create a60473f918d8637d4830af8b2fd91f017fec1ae361b04f16d539fb8d632b7e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:17:57 compute-0 systemd[1]: Started libpod-conmon-a60473f918d8637d4830af8b2fd91f017fec1ae361b04f16d539fb8d632b7e8c.scope.
Oct 01 17:17:57 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:17:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac825dca2ca1f174299677d43c2e527027ba53a5c4db091680e3676f5d8d44e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:17:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac825dca2ca1f174299677d43c2e527027ba53a5c4db091680e3676f5d8d44e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:17:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac825dca2ca1f174299677d43c2e527027ba53a5c4db091680e3676f5d8d44e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:17:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac825dca2ca1f174299677d43c2e527027ba53a5c4db091680e3676f5d8d44e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:17:57 compute-0 podman[293747]: 2025-10-01 17:17:57.24449184 +0000 UTC m=+0.659815355 container init a60473f918d8637d4830af8b2fd91f017fec1ae361b04f16d539fb8d632b7e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:17:57 compute-0 podman[293747]: 2025-10-01 17:17:57.253069748 +0000 UTC m=+0.668393263 container start a60473f918d8637d4830af8b2fd91f017fec1ae361b04f16d539fb8d632b7e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 17:17:57 compute-0 podman[293747]: 2025-10-01 17:17:57.381814186 +0000 UTC m=+0.797137711 container attach a60473f918d8637d4830af8b2fd91f017fec1ae361b04f16d539fb8d632b7e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 17:17:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]: {
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:         "osd_id": 2,
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:         "type": "bluestore"
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:     },
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:         "osd_id": 0,
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:         "type": "bluestore"
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:     },
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:         "osd_id": 1,
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:         "type": "bluestore"
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]:     }
Oct 01 17:17:58 compute-0 elated_aryabhata[293763]: }
Oct 01 17:17:58 compute-0 systemd[1]: libpod-a60473f918d8637d4830af8b2fd91f017fec1ae361b04f16d539fb8d632b7e8c.scope: Deactivated successfully.
Oct 01 17:17:58 compute-0 podman[293747]: 2025-10-01 17:17:58.264968224 +0000 UTC m=+1.680291739 container died a60473f918d8637d4830af8b2fd91f017fec1ae361b04f16d539fb8d632b7e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:17:58 compute-0 systemd[1]: libpod-a60473f918d8637d4830af8b2fd91f017fec1ae361b04f16d539fb8d632b7e8c.scope: Consumed 1.015s CPU time.
Oct 01 17:17:58 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ac825dca2ca1f174299677d43c2e527027ba53a5c4db091680e3676f5d8d44e-merged.mount: Deactivated successfully.
Oct 01 17:17:58 compute-0 podman[293747]: 2025-10-01 17:17:58.624353229 +0000 UTC m=+2.039676744 container remove a60473f918d8637d4830af8b2fd91f017fec1ae361b04f16d539fb8d632b7e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:17:58 compute-0 systemd[1]: libpod-conmon-a60473f918d8637d4830af8b2fd91f017fec1ae361b04f16d539fb8d632b7e8c.scope: Deactivated successfully.
Oct 01 17:17:58 compute-0 sudo[293639]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:17:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:17:58 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:17:58 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:17:58 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev ff828265-47bc-4588-917a-181895c444e6 does not exist
Oct 01 17:17:58 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev ab20532d-adb6-4a56-bbb4-22a64d4549a8 does not exist
Oct 01 17:17:58 compute-0 sudo[293811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:17:58 compute-0 sudo[293811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:58 compute-0 sudo[293811]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:59 compute-0 sudo[293836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 17:17:59 compute-0 sudo[293836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:17:59 compute-0 sudo[293836]: pam_unix(sudo:session): session closed for user root
Oct 01 17:17:59 compute-0 ceph-mon[74273]: pgmap v1477: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:17:59 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:17:59 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:18:00 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:00 compute-0 ceph-mon[74273]: pgmap v1478: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:02 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:18:02 compute-0 podman[293861]: 2025-10-01 17:18:02.801840098 +0000 UTC m=+0.114295994 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:18:03 compute-0 ceph-mon[74273]: pgmap v1479: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:04 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:04 compute-0 ceph-mon[74273]: pgmap v1480: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:06 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:06 compute-0 podman[293887]: 2025-10-01 17:18:06.741615249 +0000 UTC m=+0.057078616 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 01 17:18:07 compute-0 ceph-mon[74273]: pgmap v1481: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:18:08 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:09 compute-0 ceph-mon[74273]: pgmap v1482: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:10 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:18:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:18:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_17:18:11
Oct 01 17:18:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 17:18:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 17:18:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'vms']
Oct 01 17:18:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:18:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:18:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 17:18:11 compute-0 ceph-mon[74273]: pgmap v1483: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:18:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:18:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 17:18:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:18:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 17:18:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:18:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:18:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:18:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:18:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:18:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:18:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:18:12 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:18:12 compute-0 ceph-mon[74273]: pgmap v1484: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:14 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:14 compute-0 ceph-mon[74273]: pgmap v1485: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:16 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:16 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Oct 01 17:18:16 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:18:16.995575) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 17:18:16 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Oct 01 17:18:16 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759339096995606, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1071, "num_deletes": 251, "total_data_size": 1639768, "memory_usage": 1661712, "flush_reason": "Manual Compaction"}
Oct 01 17:18:16 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Oct 01 17:18:16 compute-0 ceph-mon[74273]: pgmap v1486: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759339097117692, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1603194, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32325, "largest_seqno": 33395, "table_properties": {"data_size": 1597878, "index_size": 2776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11136, "raw_average_key_size": 19, "raw_value_size": 1587354, "raw_average_value_size": 2809, "num_data_blocks": 125, "num_entries": 565, "num_filter_entries": 565, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759338992, "oldest_key_time": 1759338992, "file_creation_time": 1759339096, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 122196 microseconds, and 4349 cpu microseconds.
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:18:17.117762) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1603194 bytes OK
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:18:17.117789) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:18:17.121585) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:18:17.121619) EVENT_LOG_v1 {"time_micros": 1759339097121609, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:18:17.121635) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1634756, prev total WAL file size 1634756, number of live WAL files 2.
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:18:17.122288) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1565KB)], [68(8801KB)]
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759339097122362, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10615701, "oldest_snapshot_seqno": -1}
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6296 keys, 8876705 bytes, temperature: kUnknown
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759339097311704, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8876705, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8835501, "index_size": 24390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 158711, "raw_average_key_size": 25, "raw_value_size": 8723484, "raw_average_value_size": 1385, "num_data_blocks": 992, "num_entries": 6296, "num_filter_entries": 6296, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336399, "oldest_key_time": 0, "file_creation_time": 1759339097, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:18:17.311947) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8876705 bytes
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:18:17.331259) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 56.1 rd, 46.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.6 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(12.2) write-amplify(5.5) OK, records in: 6810, records dropped: 514 output_compression: NoCompression
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:18:17.331340) EVENT_LOG_v1 {"time_micros": 1759339097331280, "job": 38, "event": "compaction_finished", "compaction_time_micros": 189269, "compaction_time_cpu_micros": 19553, "output_level": 6, "num_output_files": 1, "total_output_size": 8876705, "num_input_records": 6810, "num_output_records": 6296, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759339097331795, "job": 38, "event": "table_file_deletion", "file_number": 70}
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759339097333404, "job": 38, "event": "table_file_deletion", "file_number": 68}
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:18:17.122178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:18:17.333480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:18:17.333485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:18:17.333486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:18:17.333488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:18:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:18:17.333489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:18:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:18:18 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:18 compute-0 ceph-mon[74273]: pgmap v1487: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:19 compute-0 podman[293906]: 2025-10-01 17:18:19.748756633 +0000 UTC m=+0.066322851 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 01 17:18:19 compute-0 podman[293905]: 2025-10-01 17:18:19.785794176 +0000 UTC m=+0.103859048 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 01 17:18:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:18:19.988 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:18:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:18:19.989 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:18:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:18:19.989 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:18:20 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:20 compute-0 ceph-mon[74273]: pgmap v1488: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005739061380803542 of space, bias 4.0, pg target 0.6886873656964251 quantized to 16 (current 16)
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:18:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:18:22 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:18:23 compute-0 ceph-mon[74273]: pgmap v1489: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:24 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:24 compute-0 ceph-mon[74273]: pgmap v1490: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:24 compute-0 nova_compute[259504]: 2025-10-01 17:18:24.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:18:26 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:26 compute-0 ceph-mon[74273]: pgmap v1491: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:18:28 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:29 compute-0 ceph-mon[74273]: pgmap v1492: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:30 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:30 compute-0 nova_compute[259504]: 2025-10-01 17:18:30.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:18:30 compute-0 nova_compute[259504]: 2025-10-01 17:18:30.751 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 17:18:30 compute-0 nova_compute[259504]: 2025-10-01 17:18:30.751 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 17:18:30 compute-0 ceph-mon[74273]: pgmap v1493: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:30 compute-0 nova_compute[259504]: 2025-10-01 17:18:30.766 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 17:18:32 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:32 compute-0 ceph-mon[74273]: pgmap v1494: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:18:32 compute-0 nova_compute[259504]: 2025-10-01 17:18:32.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:18:32 compute-0 nova_compute[259504]: 2025-10-01 17:18:32.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:18:32 compute-0 nova_compute[259504]: 2025-10-01 17:18:32.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:18:32 compute-0 nova_compute[259504]: 2025-10-01 17:18:32.772 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:18:32 compute-0 nova_compute[259504]: 2025-10-01 17:18:32.773 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:18:32 compute-0 nova_compute[259504]: 2025-10-01 17:18:32.773 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:18:32 compute-0 nova_compute[259504]: 2025-10-01 17:18:32.773 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 17:18:32 compute-0 nova_compute[259504]: 2025-10-01 17:18:32.773 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:18:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:18:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2455997445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:18:33 compute-0 nova_compute[259504]: 2025-10-01 17:18:33.222 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:18:33 compute-0 nova_compute[259504]: 2025-10-01 17:18:33.443 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 17:18:33 compute-0 nova_compute[259504]: 2025-10-01 17:18:33.445 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4964MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 17:18:33 compute-0 nova_compute[259504]: 2025-10-01 17:18:33.445 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:18:33 compute-0 nova_compute[259504]: 2025-10-01 17:18:33.446 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:18:33 compute-0 nova_compute[259504]: 2025-10-01 17:18:33.527 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 17:18:33 compute-0 nova_compute[259504]: 2025-10-01 17:18:33.527 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 17:18:33 compute-0 nova_compute[259504]: 2025-10-01 17:18:33.552 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:18:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2455997445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:18:33 compute-0 podman[293978]: 2025-10-01 17:18:33.756344397 +0000 UTC m=+0.068507679 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:18:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:18:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2662827524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:18:33 compute-0 nova_compute[259504]: 2025-10-01 17:18:33.967 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:18:33 compute-0 nova_compute[259504]: 2025-10-01 17:18:33.975 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed in ProviderTree for provider: 2417da73-53f1-4edf-ae4c-fbd9fa470d6b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 17:18:34 compute-0 nova_compute[259504]: 2025-10-01 17:18:34.010 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Inventory has not changed for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 17:18:34 compute-0 nova_compute[259504]: 2025-10-01 17:18:34.012 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 17:18:34 compute-0 nova_compute[259504]: 2025-10-01 17:18:34.013 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:18:34 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:34 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2662827524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:18:34 compute-0 ceph-mon[74273]: pgmap v1495: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:35 compute-0 nova_compute[259504]: 2025-10-01 17:18:35.014 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:18:36 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:36 compute-0 nova_compute[259504]: 2025-10-01 17:18:36.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:18:36 compute-0 nova_compute[259504]: 2025-10-01 17:18:36.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:18:36 compute-0 nova_compute[259504]: 2025-10-01 17:18:36.751 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:18:36 compute-0 nova_compute[259504]: 2025-10-01 17:18:36.751 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 17:18:37 compute-0 ceph-mon[74273]: pgmap v1496: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:37 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:18:37 compute-0 podman[294015]: 2025-10-01 17:18:37.749670758 +0000 UTC m=+0.061940095 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 01 17:18:38 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:39 compute-0 ceph-mon[74273]: pgmap v1497: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:40 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:18:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:18:41 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 17:18:41 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 17:18:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:18:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:18:41 compute-0 ceph-mon[74273]: pgmap v1498: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:18:41 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:18:41 compute-0 nova_compute[259504]: 2025-10-01 17:18:41.746 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:18:42 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:42 compute-0 ceph-mon[74273]: pgmap v1499: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:42 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:18:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 17:18:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/489817131' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:18:43 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 17:18:43 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/489817131' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:18:43 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/489817131' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 17:18:43 compute-0 ceph-mon[74273]: from='client.? 192.168.122.10:0/489817131' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 17:18:44 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:44 compute-0 ceph-mon[74273]: pgmap v1500: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:46 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:47 compute-0 ceph-mon[74273]: pgmap v1501: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:47 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:18:48 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:48 compute-0 ceph-mon[74273]: pgmap v1502: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:50 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:50 compute-0 podman[294036]: 2025-10-01 17:18:50.753151203 +0000 UTC m=+0.059469610 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct 01 17:18:50 compute-0 podman[294035]: 2025-10-01 17:18:50.778804348 +0000 UTC m=+0.088721488 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 01 17:18:51 compute-0 ceph-mon[74273]: pgmap v1503: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:52 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:52 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:18:53 compute-0 ceph-mon[74273]: pgmap v1504: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:54 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:54 compute-0 ceph-mon[74273]: pgmap v1505: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:56 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:56 compute-0 ceph-mon[74273]: pgmap v1506: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:57 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:18:58 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:59 compute-0 sudo[294074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:18:59 compute-0 sudo[294074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:18:59 compute-0 sudo[294074]: pam_unix(sudo:session): session closed for user root
Oct 01 17:18:59 compute-0 sudo[294099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:18:59 compute-0 sudo[294099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:18:59 compute-0 sudo[294099]: pam_unix(sudo:session): session closed for user root
Oct 01 17:18:59 compute-0 sudo[294124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:18:59 compute-0 sudo[294124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:18:59 compute-0 sudo[294124]: pam_unix(sudo:session): session closed for user root
Oct 01 17:18:59 compute-0 sudo[294149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 17:18:59 compute-0 sudo[294149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:18:59 compute-0 ceph-mon[74273]: pgmap v1507: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:18:59 compute-0 sudo[294149]: pam_unix(sudo:session): session closed for user root
Oct 01 17:18:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:18:59 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:18:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 17:18:59 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:18:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 17:18:59 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:18:59 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 47422c3e-4208-4447-97cc-0393903e71ec does not exist
Oct 01 17:18:59 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 9f3c485c-d74d-46f9-acbc-306376efb432 does not exist
Oct 01 17:18:59 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 75a3d386-8d3f-4158-a909-32ecff3757ef does not exist
Oct 01 17:18:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 17:18:59 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:18:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 17:18:59 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:18:59 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:18:59 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:18:59 compute-0 sudo[294205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:18:59 compute-0 sudo[294205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:18:59 compute-0 sudo[294205]: pam_unix(sudo:session): session closed for user root
Oct 01 17:19:00 compute-0 sudo[294230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:19:00 compute-0 sudo[294230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:19:00 compute-0 sudo[294230]: pam_unix(sudo:session): session closed for user root
Oct 01 17:19:00 compute-0 sudo[294255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:19:00 compute-0 sudo[294255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:19:00 compute-0 sudo[294255]: pam_unix(sudo:session): session closed for user root
Oct 01 17:19:00 compute-0 sudo[294280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 17:19:00 compute-0 sudo[294280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:19:00 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:00 compute-0 podman[294346]: 2025-10-01 17:19:00.500765329 +0000 UTC m=+0.045698208 container create 438e79c2d0c143aa0cdef9ead78f2092ac464a03c6d37c23753318eae1217b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 17:19:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:19:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 17:19:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:19:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 17:19:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 17:19:00 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:19:00 compute-0 systemd[1]: Started libpod-conmon-438e79c2d0c143aa0cdef9ead78f2092ac464a03c6d37c23753318eae1217b2f.scope.
Oct 01 17:19:00 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:19:00 compute-0 podman[294346]: 2025-10-01 17:19:00.477377123 +0000 UTC m=+0.022309982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:19:00 compute-0 podman[294346]: 2025-10-01 17:19:00.586435321 +0000 UTC m=+0.131368200 container init 438e79c2d0c143aa0cdef9ead78f2092ac464a03c6d37c23753318eae1217b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 01 17:19:00 compute-0 podman[294346]: 2025-10-01 17:19:00.594071974 +0000 UTC m=+0.139004823 container start 438e79c2d0c143aa0cdef9ead78f2092ac464a03c6d37c23753318eae1217b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 17:19:00 compute-0 podman[294346]: 2025-10-01 17:19:00.597534527 +0000 UTC m=+0.142467376 container attach 438e79c2d0c143aa0cdef9ead78f2092ac464a03c6d37c23753318eae1217b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 17:19:00 compute-0 clever_darwin[294362]: 167 167
Oct 01 17:19:00 compute-0 systemd[1]: libpod-438e79c2d0c143aa0cdef9ead78f2092ac464a03c6d37c23753318eae1217b2f.scope: Deactivated successfully.
Oct 01 17:19:00 compute-0 podman[294346]: 2025-10-01 17:19:00.601362588 +0000 UTC m=+0.146295437 container died 438e79c2d0c143aa0cdef9ead78f2092ac464a03c6d37c23753318eae1217b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:19:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f7c1f4e991b05095893c14c53bc154a7f520486b04a016264b8188fe23d2e03-merged.mount: Deactivated successfully.
Oct 01 17:19:00 compute-0 podman[294346]: 2025-10-01 17:19:00.643933641 +0000 UTC m=+0.188866490 container remove 438e79c2d0c143aa0cdef9ead78f2092ac464a03c6d37c23753318eae1217b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 17:19:00 compute-0 systemd[1]: libpod-conmon-438e79c2d0c143aa0cdef9ead78f2092ac464a03c6d37c23753318eae1217b2f.scope: Deactivated successfully.
Oct 01 17:19:00 compute-0 podman[294386]: 2025-10-01 17:19:00.859821326 +0000 UTC m=+0.049343029 container create 737b91098c9de090930d2d4d70d53b2bb188f95cb8e98e3798f9b4ff63eb2c71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 17:19:00 compute-0 sshd-session[294385]: Accepted publickey for zuul from 192.168.122.10 port 39566 ssh2: ECDSA SHA256:cAu4I/kPoFUKOLOQB71BUt6Th09G4PIJ2iHT8DD8gEY
Oct 01 17:19:00 compute-0 systemd[1]: Started libpod-conmon-737b91098c9de090930d2d4d70d53b2bb188f95cb8e98e3798f9b4ff63eb2c71.scope.
Oct 01 17:19:00 compute-0 systemd-logind[788]: New session 56 of user zuul.
Oct 01 17:19:00 compute-0 systemd[1]: Started Session 56 of User zuul.
Oct 01 17:19:00 compute-0 sshd-session[294385]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 17:19:00 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c259287a061c9c5113404a8df33dc2939c1ab24a338de59c9bac3b0f8e5336e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c259287a061c9c5113404a8df33dc2939c1ab24a338de59c9bac3b0f8e5336e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:19:00 compute-0 podman[294386]: 2025-10-01 17:19:00.83877581 +0000 UTC m=+0.028297553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c259287a061c9c5113404a8df33dc2939c1ab24a338de59c9bac3b0f8e5336e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c259287a061c9c5113404a8df33dc2939c1ab24a338de59c9bac3b0f8e5336e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c259287a061c9c5113404a8df33dc2939c1ab24a338de59c9bac3b0f8e5336e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 17:19:00 compute-0 podman[294386]: 2025-10-01 17:19:00.953387409 +0000 UTC m=+0.142909122 container init 737b91098c9de090930d2d4d70d53b2bb188f95cb8e98e3798f9b4ff63eb2c71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_villani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:19:00 compute-0 podman[294386]: 2025-10-01 17:19:00.962746204 +0000 UTC m=+0.152267917 container start 737b91098c9de090930d2d4d70d53b2bb188f95cb8e98e3798f9b4ff63eb2c71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 17:19:00 compute-0 podman[294386]: 2025-10-01 17:19:00.967261692 +0000 UTC m=+0.156783385 container attach 737b91098c9de090930d2d4d70d53b2bb188f95cb8e98e3798f9b4ff63eb2c71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 17:19:01 compute-0 sudo[294410]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 01 17:19:01 compute-0 sudo[294410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 17:19:01 compute-0 ceph-mon[74273]: pgmap v1508: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:01 compute-0 infallible_villani[294404]: --> passed data devices: 0 physical, 3 LVM
Oct 01 17:19:01 compute-0 infallible_villani[294404]: --> relative data size: 1.0
Oct 01 17:19:01 compute-0 infallible_villani[294404]: --> All data devices are unavailable
Oct 01 17:19:01 compute-0 podman[294386]: 2025-10-01 17:19:01.978327107 +0000 UTC m=+1.167848800 container died 737b91098c9de090930d2d4d70d53b2bb188f95cb8e98e3798f9b4ff63eb2c71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Oct 01 17:19:02 compute-0 systemd[1]: libpod-737b91098c9de090930d2d4d70d53b2bb188f95cb8e98e3798f9b4ff63eb2c71.scope: Deactivated successfully.
Oct 01 17:19:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c259287a061c9c5113404a8df33dc2939c1ab24a338de59c9bac3b0f8e5336e-merged.mount: Deactivated successfully.
Oct 01 17:19:02 compute-0 podman[294386]: 2025-10-01 17:19:02.099053768 +0000 UTC m=+1.288575451 container remove 737b91098c9de090930d2d4d70d53b2bb188f95cb8e98e3798f9b4ff63eb2c71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_villani, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:19:02 compute-0 systemd[1]: libpod-conmon-737b91098c9de090930d2d4d70d53b2bb188f95cb8e98e3798f9b4ff63eb2c71.scope: Deactivated successfully.
Oct 01 17:19:02 compute-0 sudo[294280]: pam_unix(sudo:session): session closed for user root
Oct 01 17:19:02 compute-0 sudo[294499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:19:02 compute-0 sudo[294499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:19:02 compute-0 sudo[294499]: pam_unix(sudo:session): session closed for user root
Oct 01 17:19:02 compute-0 sudo[294539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:19:02 compute-0 sudo[294539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:19:02 compute-0 sudo[294539]: pam_unix(sudo:session): session closed for user root
Oct 01 17:19:02 compute-0 sudo[294575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:19:02 compute-0 sudo[294575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:19:02 compute-0 sudo[294575]: pam_unix(sudo:session): session closed for user root
Oct 01 17:19:02 compute-0 sudo[294603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- lvm list --format json
Oct 01 17:19:02 compute-0 sudo[294603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:19:02 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:02 compute-0 podman[294686]: 2025-10-01 17:19:02.673405185 +0000 UTC m=+0.045576638 container create 2b65f0fbbbc9ce5932621bc352c2451fd51b5d5f6fb26c5e12fe9a3c4101a8c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lamport, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:19:02 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:19:02 compute-0 systemd[1]: Started libpod-conmon-2b65f0fbbbc9ce5932621bc352c2451fd51b5d5f6fb26c5e12fe9a3c4101a8c5.scope.
Oct 01 17:19:02 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:19:02 compute-0 podman[294686]: 2025-10-01 17:19:02.650199228 +0000 UTC m=+0.022370691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:19:02 compute-0 podman[294686]: 2025-10-01 17:19:02.763654395 +0000 UTC m=+0.135825828 container init 2b65f0fbbbc9ce5932621bc352c2451fd51b5d5f6fb26c5e12fe9a3c4101a8c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lamport, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 17:19:02 compute-0 podman[294686]: 2025-10-01 17:19:02.775504797 +0000 UTC m=+0.147676210 container start 2b65f0fbbbc9ce5932621bc352c2451fd51b5d5f6fb26c5e12fe9a3c4101a8c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:19:02 compute-0 podman[294686]: 2025-10-01 17:19:02.778625232 +0000 UTC m=+0.150796735 container attach 2b65f0fbbbc9ce5932621bc352c2451fd51b5d5f6fb26c5e12fe9a3c4101a8c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 17:19:02 compute-0 stupefied_lamport[294706]: 167 167
Oct 01 17:19:02 compute-0 systemd[1]: libpod-2b65f0fbbbc9ce5932621bc352c2451fd51b5d5f6fb26c5e12fe9a3c4101a8c5.scope: Deactivated successfully.
Oct 01 17:19:02 compute-0 podman[294686]: 2025-10-01 17:19:02.781977485 +0000 UTC m=+0.154148918 container died 2b65f0fbbbc9ce5932621bc352c2451fd51b5d5f6fb26c5e12fe9a3c4101a8c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lamport, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 17:19:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-69cc5de1fcf1c481ff6919552246da3ebe07e84e554b35db04d1cedb6faf2ee6-merged.mount: Deactivated successfully.
Oct 01 17:19:02 compute-0 podman[294686]: 2025-10-01 17:19:02.832113331 +0000 UTC m=+0.204284744 container remove 2b65f0fbbbc9ce5932621bc352c2451fd51b5d5f6fb26c5e12fe9a3c4101a8c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lamport, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 17:19:02 compute-0 systemd[1]: libpod-conmon-2b65f0fbbbc9ce5932621bc352c2451fd51b5d5f6fb26c5e12fe9a3c4101a8c5.scope: Deactivated successfully.
Oct 01 17:19:03 compute-0 podman[294738]: 2025-10-01 17:19:03.007991222 +0000 UTC m=+0.048168976 container create 143f2329f97688c95ab8d89b5447758737da00391e1994efc39cdf89934cedc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 01 17:19:03 compute-0 systemd[1]: Started libpod-conmon-143f2329f97688c95ab8d89b5447758737da00391e1994efc39cdf89934cedc5.scope.
Oct 01 17:19:03 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:19:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9087a140e95bd3f24f273e695142f4e0abf57c94dbd54f0986853a32d7e5498f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:19:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9087a140e95bd3f24f273e695142f4e0abf57c94dbd54f0986853a32d7e5498f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:19:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9087a140e95bd3f24f273e695142f4e0abf57c94dbd54f0986853a32d7e5498f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:19:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9087a140e95bd3f24f273e695142f4e0abf57c94dbd54f0986853a32d7e5498f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:19:03 compute-0 podman[294738]: 2025-10-01 17:19:02.984136489 +0000 UTC m=+0.024314263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:19:03 compute-0 podman[294738]: 2025-10-01 17:19:03.098818369 +0000 UTC m=+0.138996123 container init 143f2329f97688c95ab8d89b5447758737da00391e1994efc39cdf89934cedc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_villani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:19:03 compute-0 podman[294738]: 2025-10-01 17:19:03.10477393 +0000 UTC m=+0.144951684 container start 143f2329f97688c95ab8d89b5447758737da00391e1994efc39cdf89934cedc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_villani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 17:19:03 compute-0 podman[294738]: 2025-10-01 17:19:03.108357392 +0000 UTC m=+0.148535146 container attach 143f2329f97688c95ab8d89b5447758737da00391e1994efc39cdf89934cedc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_villani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 17:19:03 compute-0 ceph-mon[74273]: pgmap v1509: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:03 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14811 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:03 compute-0 vigorous_villani[294773]: {
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:     "0": [
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:         {
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "devices": [
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "/dev/loop3"
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             ],
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "lv_name": "ceph_lv0",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "lv_size": "21470642176",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5fa2557d-fd0d-408d-adf7-3e2a01798c5f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "lv_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "name": "ceph_lv0",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "tags": {
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.block_uuid": "foEYsQ-w7BX-3Jv6-ng21-OgLt-Wth0-x4k3ws",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.cluster_name": "ceph",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.crush_device_class": "",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.encrypted": "0",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.osd_fsid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.osd_id": "0",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.type": "block",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.vdo": "0"
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             },
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "type": "block",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "vg_name": "ceph_vg0"
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:         }
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:     ],
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:     "1": [
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:         {
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "devices": [
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "/dev/loop4"
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             ],
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "lv_name": "ceph_lv1",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "lv_size": "21470642176",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fb77ac1b-9869-45aa-84e9-22b10d405207,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "lv_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "name": "ceph_lv1",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "tags": {
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.block_uuid": "XSgUPv-Rk5b-GtWp-c20F-WbIM-XxsA-Mlax4k",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.cluster_name": "ceph",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.crush_device_class": "",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.encrypted": "0",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.osd_fsid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.osd_id": "1",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.type": "block",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.vdo": "0"
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             },
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "type": "block",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "vg_name": "ceph_vg1"
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:         }
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:     ],
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:     "2": [
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:         {
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "devices": [
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "/dev/loop5"
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             ],
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "lv_name": "ceph_lv2",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "lv_size": "21470642176",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f44264e3-e26a-5bd3-9e84-b4ba651d9cf5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=565523b5-fa16-4aaa-b37b-8314e4edb10e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "lv_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "name": "ceph_lv2",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "tags": {
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.block_uuid": "v9qNyE-udgg-vTY7-kFj2-QJh2-V6n1-Twxfpl",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.cluster_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.cluster_name": "ceph",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.crush_device_class": "",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.encrypted": "0",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.osd_fsid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.osd_id": "2",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.type": "block",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:                 "ceph.vdo": "0"
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             },
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "type": "block",
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:             "vg_name": "ceph_vg2"
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:         }
Oct 01 17:19:03 compute-0 vigorous_villani[294773]:     ]
Oct 01 17:19:03 compute-0 vigorous_villani[294773]: }
Oct 01 17:19:03 compute-0 podman[294738]: 2025-10-01 17:19:03.894091441 +0000 UTC m=+0.934269195 container died 143f2329f97688c95ab8d89b5447758737da00391e1994efc39cdf89934cedc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_villani, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 01 17:19:03 compute-0 systemd[1]: libpod-143f2329f97688c95ab8d89b5447758737da00391e1994efc39cdf89934cedc5.scope: Deactivated successfully.
Oct 01 17:19:04 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14813 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-9087a140e95bd3f24f273e695142f4e0abf57c94dbd54f0986853a32d7e5498f-merged.mount: Deactivated successfully.
Oct 01 17:19:04 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:04 compute-0 podman[294863]: 2025-10-01 17:19:04.523667513 +0000 UTC m=+0.579979262 container health_status 347184c563237d8bfa2a474dcaad27034bd475ef5d41df45a9e6aa274f3cc25f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 17:19:04 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 01 17:19:04 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3198631947' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 01 17:19:04 compute-0 ceph-mon[74273]: from='client.14811 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:04 compute-0 ceph-mon[74273]: from='client.14813 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:04 compute-0 ceph-mon[74273]: pgmap v1510: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:04 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3198631947' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 01 17:19:04 compute-0 podman[294738]: 2025-10-01 17:19:04.751143165 +0000 UTC m=+1.791320919 container remove 143f2329f97688c95ab8d89b5447758737da00391e1994efc39cdf89934cedc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:19:04 compute-0 sudo[294603]: pam_unix(sudo:session): session closed for user root
Oct 01 17:19:04 compute-0 systemd[1]: libpod-conmon-143f2329f97688c95ab8d89b5447758737da00391e1994efc39cdf89934cedc5.scope: Deactivated successfully.
Oct 01 17:19:04 compute-0 sudo[294943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:19:04 compute-0 sudo[294943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:19:04 compute-0 sudo[294943]: pam_unix(sudo:session): session closed for user root
Oct 01 17:19:04 compute-0 sudo[294968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 17:19:04 compute-0 sudo[294968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:19:04 compute-0 sudo[294968]: pam_unix(sudo:session): session closed for user root
Oct 01 17:19:05 compute-0 sudo[294993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:19:05 compute-0 sudo[294993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:19:05 compute-0 sudo[294993]: pam_unix(sudo:session): session closed for user root
Oct 01 17:19:05 compute-0 sudo[295018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f44264e3-e26a-5bd3-9e84-b4ba651d9cf5/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f44264e3-e26a-5bd3-9e84-b4ba651d9cf5 -- raw list --format json
Oct 01 17:19:05 compute-0 sudo[295018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:19:05 compute-0 podman[295088]: 2025-10-01 17:19:05.432922317 +0000 UTC m=+0.040793281 container create 00c165bf9e6d896ecc8623d1426bd0d4ada44f9498faab8b04bee93740a90bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jennings, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 17:19:05 compute-0 systemd[1]: Started libpod-conmon-00c165bf9e6d896ecc8623d1426bd0d4ada44f9498faab8b04bee93740a90bdd.scope.
Oct 01 17:19:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:19:05 compute-0 podman[295088]: 2025-10-01 17:19:05.414623228 +0000 UTC m=+0.022494212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:19:05 compute-0 podman[295088]: 2025-10-01 17:19:05.522062952 +0000 UTC m=+0.129933936 container init 00c165bf9e6d896ecc8623d1426bd0d4ada44f9498faab8b04bee93740a90bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 17:19:05 compute-0 podman[295088]: 2025-10-01 17:19:05.530018737 +0000 UTC m=+0.137889701 container start 00c165bf9e6d896ecc8623d1426bd0d4ada44f9498faab8b04bee93740a90bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 17:19:05 compute-0 podman[295088]: 2025-10-01 17:19:05.53371215 +0000 UTC m=+0.141583114 container attach 00c165bf9e6d896ecc8623d1426bd0d4ada44f9498faab8b04bee93740a90bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jennings, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:19:05 compute-0 gracious_jennings[295104]: 167 167
Oct 01 17:19:05 compute-0 systemd[1]: libpod-00c165bf9e6d896ecc8623d1426bd0d4ada44f9498faab8b04bee93740a90bdd.scope: Deactivated successfully.
Oct 01 17:19:05 compute-0 podman[295088]: 2025-10-01 17:19:05.537733073 +0000 UTC m=+0.145604037 container died 00c165bf9e6d896ecc8623d1426bd0d4ada44f9498faab8b04bee93740a90bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 01 17:19:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c2fca7d1032cb95d7f1033172f16f4b18db41d72470fdf2eed3c2dd2a3bc5ec-merged.mount: Deactivated successfully.
Oct 01 17:19:05 compute-0 podman[295088]: 2025-10-01 17:19:05.57211165 +0000 UTC m=+0.179982614 container remove 00c165bf9e6d896ecc8623d1426bd0d4ada44f9498faab8b04bee93740a90bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 17:19:05 compute-0 systemd[1]: libpod-conmon-00c165bf9e6d896ecc8623d1426bd0d4ada44f9498faab8b04bee93740a90bdd.scope: Deactivated successfully.
Oct 01 17:19:05 compute-0 podman[295128]: 2025-10-01 17:19:05.733327577 +0000 UTC m=+0.040488502 container create 8727451e8706872ca6fa6476cc53fbbf11571466569b359f829235d823d5d864 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_perlman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 17:19:05 compute-0 systemd[1]: Started libpod-conmon-8727451e8706872ca6fa6476cc53fbbf11571466569b359f829235d823d5d864.scope.
Oct 01 17:19:05 compute-0 podman[295128]: 2025-10-01 17:19:05.716372752 +0000 UTC m=+0.023533687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 17:19:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 17:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74c00d4a8711609df7d8a766b2da7f12582d81c714334e1895404c6b84b54bb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 17:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74c00d4a8711609df7d8a766b2da7f12582d81c714334e1895404c6b84b54bb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 17:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74c00d4a8711609df7d8a766b2da7f12582d81c714334e1895404c6b84b54bb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 17:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74c00d4a8711609df7d8a766b2da7f12582d81c714334e1895404c6b84b54bb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 17:19:05 compute-0 podman[295128]: 2025-10-01 17:19:05.831236133 +0000 UTC m=+0.138397088 container init 8727451e8706872ca6fa6476cc53fbbf11571466569b359f829235d823d5d864 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 17:19:05 compute-0 podman[295128]: 2025-10-01 17:19:05.837560815 +0000 UTC m=+0.144721740 container start 8727451e8706872ca6fa6476cc53fbbf11571466569b359f829235d823d5d864 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_perlman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 17:19:05 compute-0 podman[295128]: 2025-10-01 17:19:05.849144194 +0000 UTC m=+0.156305139 container attach 8727451e8706872ca6fa6476cc53fbbf11571466569b359f829235d823d5d864 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 17:19:06 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:06 compute-0 musing_perlman[295145]: {
Oct 01 17:19:06 compute-0 musing_perlman[295145]:     "565523b5-fa16-4aaa-b37b-8314e4edb10e": {
Oct 01 17:19:06 compute-0 musing_perlman[295145]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:19:06 compute-0 musing_perlman[295145]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 17:19:06 compute-0 musing_perlman[295145]:         "osd_id": 2,
Oct 01 17:19:06 compute-0 musing_perlman[295145]:         "osd_uuid": "565523b5-fa16-4aaa-b37b-8314e4edb10e",
Oct 01 17:19:06 compute-0 musing_perlman[295145]:         "type": "bluestore"
Oct 01 17:19:06 compute-0 musing_perlman[295145]:     },
Oct 01 17:19:06 compute-0 musing_perlman[295145]:     "5fa2557d-fd0d-408d-adf7-3e2a01798c5f": {
Oct 01 17:19:06 compute-0 musing_perlman[295145]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:19:06 compute-0 musing_perlman[295145]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 17:19:06 compute-0 musing_perlman[295145]:         "osd_id": 0,
Oct 01 17:19:06 compute-0 musing_perlman[295145]:         "osd_uuid": "5fa2557d-fd0d-408d-adf7-3e2a01798c5f",
Oct 01 17:19:06 compute-0 musing_perlman[295145]:         "type": "bluestore"
Oct 01 17:19:06 compute-0 musing_perlman[295145]:     },
Oct 01 17:19:06 compute-0 musing_perlman[295145]:     "fb77ac1b-9869-45aa-84e9-22b10d405207": {
Oct 01 17:19:06 compute-0 musing_perlman[295145]:         "ceph_fsid": "f44264e3-e26a-5bd3-9e84-b4ba651d9cf5",
Oct 01 17:19:06 compute-0 musing_perlman[295145]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 17:19:06 compute-0 musing_perlman[295145]:         "osd_id": 1,
Oct 01 17:19:06 compute-0 musing_perlman[295145]:         "osd_uuid": "fb77ac1b-9869-45aa-84e9-22b10d405207",
Oct 01 17:19:06 compute-0 musing_perlman[295145]:         "type": "bluestore"
Oct 01 17:19:06 compute-0 musing_perlman[295145]:     }
Oct 01 17:19:06 compute-0 musing_perlman[295145]: }
Oct 01 17:19:06 compute-0 systemd[1]: libpod-8727451e8706872ca6fa6476cc53fbbf11571466569b359f829235d823d5d864.scope: Deactivated successfully.
Oct 01 17:19:06 compute-0 systemd[1]: libpod-8727451e8706872ca6fa6476cc53fbbf11571466569b359f829235d823d5d864.scope: Consumed 1.065s CPU time.
Oct 01 17:19:06 compute-0 podman[295128]: 2025-10-01 17:19:06.899937001 +0000 UTC m=+1.207097926 container died 8727451e8706872ca6fa6476cc53fbbf11571466569b359f829235d823d5d864 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_perlman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 17:19:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-74c00d4a8711609df7d8a766b2da7f12582d81c714334e1895404c6b84b54bb2-merged.mount: Deactivated successfully.
Oct 01 17:19:06 compute-0 podman[295128]: 2025-10-01 17:19:06.965520801 +0000 UTC m=+1.272681726 container remove 8727451e8706872ca6fa6476cc53fbbf11571466569b359f829235d823d5d864 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_perlman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 17:19:06 compute-0 systemd[1]: libpod-conmon-8727451e8706872ca6fa6476cc53fbbf11571466569b359f829235d823d5d864.scope: Deactivated successfully.
Oct 01 17:19:07 compute-0 sudo[295018]: pam_unix(sudo:session): session closed for user root
Oct 01 17:19:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 17:19:07 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:19:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 17:19:07 compute-0 ceph-mon[74273]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:19:07 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev e30c6cd3-1ced-4288-95d0-f50d64234e16 does not exist
Oct 01 17:19:07 compute-0 ceph-mgr[74571]: [progress WARNING root] complete: ev 6cdff85f-ad5c-4308-974b-6ad7195b25b1 does not exist
Oct 01 17:19:07 compute-0 sudo[295195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 17:19:07 compute-0 sudo[295195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:19:07 compute-0 sudo[295195]: pam_unix(sudo:session): session closed for user root
Oct 01 17:19:07 compute-0 sudo[295220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 17:19:07 compute-0 sudo[295220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 17:19:07 compute-0 sudo[295220]: pam_unix(sudo:session): session closed for user root
Oct 01 17:19:07 compute-0 ovs-vsctl[295271]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 01 17:19:07 compute-0 ceph-mon[74273]: pgmap v1511: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:07 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:19:07 compute-0 ceph-mon[74273]: from='mgr.14132 192.168.122.100:0/2899239938' entity='mgr.compute-0.pmbdpj' 
Oct 01 17:19:07 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:19:08 compute-0 virtqemud[259310]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 01 17:19:08 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:08 compute-0 virtqemud[259310]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 01 17:19:08 compute-0 virtqemud[259310]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 01 17:19:08 compute-0 podman[295448]: 2025-10-01 17:19:08.763369539 +0000 UTC m=+0.078654892 container health_status a93f1900c4d515fc6ab7fcdf580ae2c62cce27d0a75aab6c22388057f73031e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 01 17:19:08 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: cache status {prefix=cache status} (starting...)
Oct 01 17:19:09 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: client ls {prefix=client ls} (starting...)
Oct 01 17:19:09 compute-0 lvm[295614]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 01 17:19:09 compute-0 lvm[295614]: VG ceph_vg0 finished
Oct 01 17:19:09 compute-0 lvm[295633]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 01 17:19:09 compute-0 lvm[295633]: VG ceph_vg1 finished
Oct 01 17:19:09 compute-0 lvm[295638]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 01 17:19:09 compute-0 lvm[295638]: VG ceph_vg2 finished
Oct 01 17:19:09 compute-0 ceph-mon[74273]: pgmap v1512: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:09 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14817 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:09 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: damage ls {prefix=damage ls} (starting...)
Oct 01 17:19:09 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: dump loads {prefix=dump loads} (starting...)
Oct 01 17:19:09 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14819 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:10 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 01 17:19:10 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 01 17:19:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 01 17:19:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1942779558' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 01 17:19:10 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:10 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 01 17:19:10 compute-0 ceph-mon[74273]: from='client.14817 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:10 compute-0 ceph-mon[74273]: from='client.14819 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:10 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1942779558' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 01 17:19:10 compute-0 ceph-mon[74273]: pgmap v1513: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:10 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 01 17:19:10 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14825 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:10 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 01 17:19:10 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:19:10.708+0000 7f816b913640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 01 17:19:10 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 17:19:10 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2787023077' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:19:10 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 01 17:19:11 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 01 17:19:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 01 17:19:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1657915984' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 01 17:19:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 01 17:19:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2091224514' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 01 17:19:11 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: ops {prefix=ops} (starting...)
Oct 01 17:19:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:19:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:19:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:19:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:19:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Optimize plan auto_2025-10-01_17:19:11
Oct 01 17:19:11 compute-0 ceph-mgr[74571]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 17:19:11 compute-0 ceph-mgr[74571]: [balancer INFO root] do_upmap
Oct 01 17:19:11 compute-0 ceph-mgr[74571]: [balancer INFO root] pools ['vms', '.mgr', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.control', 'default.rgw.meta', 'volumes']
Oct 01 17:19:11 compute-0 ceph-mgr[74571]: [balancer INFO root] prepared 0/10 changes
Oct 01 17:19:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 01 17:19:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1979478708' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 01 17:19:11 compute-0 ceph-mon[74273]: from='client.14825 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2787023077' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 17:19:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1657915984' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 01 17:19:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2091224514' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 01 17:19:11 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1979478708' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 01 17:19:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 17:19:11 compute-0 ceph-mgr[74571]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 17:19:11 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 01 17:19:11 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/499279424' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 01 17:19:11 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: session ls {prefix=session ls} (starting...)
Oct 01 17:19:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 01 17:19:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2279084199' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 01 17:19:12 compute-0 ceph-mds[100624]: mds.cephfs.compute-0.dbklxe asok_command: status {prefix=status} (starting...)
Oct 01 17:19:12 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14839 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 17:19:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 17:19:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:19:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 17:19:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:19:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 17:19:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:19:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 17:19:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:19:12 compute-0 ceph-mgr[74571]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 17:19:12 compute-0 ceph-mgr[74571]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3235544197
Oct 01 17:19:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 01 17:19:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2312966159' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 01 17:19:12 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:12 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14843 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:19:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/499279424' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 01 17:19:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2279084199' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 01 17:19:12 compute-0 ceph-mon[74273]: from='client.14839 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:12 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2312966159' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 01 17:19:12 compute-0 ceph-mon[74273]: pgmap v1514: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:12 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 01 17:19:12 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1723658207' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 01 17:19:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 01 17:19:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2259800320' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 01 17:19:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 01 17:19:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2710739589' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 01 17:19:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 01 17:19:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/313671392' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 01 17:19:13 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 01 17:19:13 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2478299265' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 01 17:19:13 compute-0 ceph-mon[74273]: from='client.14843 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1723658207' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 01 17:19:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2259800320' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 01 17:19:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2710739589' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 01 17:19:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/313671392' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 01 17:19:13 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2478299265' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 01 17:19:13 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14855 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:13 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 01 17:19:13 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:19:13.934+0000 7f816b913640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 01 17:19:14 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14857 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 01 17:19:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2914108197' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 01 17:19:14 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:14 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14861 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:14 compute-0 ceph-mon[74273]: from='client.14855 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:14 compute-0 ceph-mon[74273]: from='client.14857 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:14 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2914108197' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 01 17:19:14 compute-0 ceph-mon[74273]: pgmap v1515: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:14 compute-0 ceph-mon[74273]: from='client.14861 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:14 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 01 17:19:14 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2478246780' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 01 17:19:14 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14865 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 01 17:19:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2643128419' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 01 17:19:15 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14869 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:15 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14873 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:24.369341+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 557056 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:25.369567+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 548864 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:26.369773+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 548864 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:27.370089+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 548864 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:28.370224+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 540672 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:29.370485+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 532480 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:30.370688+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 532480 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:31.370837+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 532480 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:32.370971+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 532480 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:33.371112+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 524288 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:34.371260+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 524288 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:35.371520+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 516096 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:36.371652+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 516096 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:37.371852+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 516096 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 282.795837402s of 282.837615967s, submitted: 8
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:38.372046+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 68583424 unmapped: 565248 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:39.372236+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:40.372415+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 360448 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:41.372575+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 360448 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:42.372716+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 360448 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:43.372944+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 360448 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2015205854' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:44.373118+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 360448 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:45.373283+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 352256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:46.373464+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 352256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:47.373608+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 344064 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:48.373756+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 335872 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:49.373908+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 335872 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:50.374090+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 327680 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:51.374218+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 327680 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:52.374342+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 319488 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:53.374514+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 319488 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:54.374706+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 311296 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:55.374847+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 311296 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:56.374931+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 311296 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:57.375057+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 303104 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:58.375158+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 303104 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:59.375275+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 303104 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:00.375599+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 294912 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:01.375689+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 294912 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:02.375846+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 286720 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:03.375960+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 286720 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:04.376139+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 286720 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:05.376305+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 278528 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:06.376525+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 278528 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:07.376642+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 270336 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:08.376758+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 270336 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:09.376885+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 270336 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:10.377219+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 262144 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:11.377417+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 262144 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:12.377561+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 253952 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:13.377701+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 253952 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:14.377866+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 253952 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:15.377946+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 245760 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:16.378055+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 245760 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:17.378176+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 237568 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:18.378301+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 237568 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:19.378432+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69967872 unmapped: 229376 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:20.378613+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69967872 unmapped: 229376 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:21.378747+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69967872 unmapped: 229376 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:22.378876+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69976064 unmapped: 221184 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:23.379032+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69976064 unmapped: 221184 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:24.379168+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 212992 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:25.379290+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 212992 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:26.379442+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 204800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:27.379586+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 204800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:28.379738+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 204800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:29.379908+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 196608 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:30.380125+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 196608 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:31.380288+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70008832 unmapped: 188416 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:32.380436+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70008832 unmapped: 188416 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:33.380594+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 163840 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:34.380744+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 163840 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:35.380918+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 163840 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:36.381040+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 155648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:37.381188+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 155648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:38.381341+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 155648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:39.381500+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 147456 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:40.381669+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 147456 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:41.381841+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 139264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:42.381955+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 139264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:43.382093+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 131072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:44.382237+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 131072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:45.382370+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 131072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:46.382515+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:47.382673+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:48.382823+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:49.383493+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:50.384525+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:51.384666+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:52.384812+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:53.384980+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:54.385096+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:55.385240+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:56.385344+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:57.385462+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:58.385634+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:59.385827+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:00.386079+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:01.386219+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:02.386368+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:03.386521+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:04.386658+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:05.386851+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:06.386990+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:07.387154+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:08.387283+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:09.387440+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:10.387632+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:11.387797+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:12.387955+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:13.388133+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:14.388269+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:15.388393+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:16.388515+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:17.388629+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:18.388766+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:19.388907+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:20.389059+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:21.389176+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:22.389338+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:23.389485+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:24.389617+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:25.389843+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:26.390645+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:27.390765+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:28.390932+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:29.391221+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:30.391817+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:31.391948+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:32.392082+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 122880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:33.392244+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:34.392399+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:35.392579+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:36.392691+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:37.392844+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:38.393095+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:39.393315+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:40.393492+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:41.393657+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:42.393787+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:43.393959+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:44.394072+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:45.394221+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:46.394371+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:47.394509+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:48.394683+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:49.394836+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:50.394988+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:51.395150+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:52.395263+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 98304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:53.395423+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 90112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:54.395609+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 90112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:55.395757+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 90112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:56.395970+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 90112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:57.396114+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 90112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:58.396353+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 90112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:59.397037+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 90112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:00.397213+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:01.397405+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:02.397594+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:03.397730+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:04.397857+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:05.397955+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:06.398101+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:07.398286+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:08.398463+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:09.398577+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:10.398730+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:11.398926+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:12.399071+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:13.399203+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:14.399579+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:15.399691+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:16.399819+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:17.399976+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 81920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:18.400131+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:19.400273+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:20.400454+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:21.400615+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:22.400777+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:23.400924+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:24.401069+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:25.401219+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:26.401324+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:27.401431+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:28.401545+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:29.401682+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:30.401836+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:31.401993+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:32.402149+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:33.402309+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:34.402438+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:35.402601+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:36.402795+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:37.402961+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:38.403079+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:39.403231+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:40.403410+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 73728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:41.403523+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:42.403637+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:43.403787+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:44.403984+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:45.404103+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:46.404240+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:47.404413+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:48.404534+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:49.404718+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:50.404933+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:51.405048+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 65536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:52.405185+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:53.405368+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:54.405484+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:55.405598+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:56.405731+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:57.405919+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:58.406079+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:59.406274+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:00.406473+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:01.407070+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:02.407261+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:03.408311+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:04.408453+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:05.409219+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:06.409405+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:07.409565+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:08.409829+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:09.410088+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:10.410270+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:11.410719+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:12.410929+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 57344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:13.411340+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 49152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:14.411478+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:15.411592+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:16.411723+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:17.411850+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:18.411956+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:19.412071+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:20.412471+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:21.413032+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:22.413190+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:23.413266+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:24.413417+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:25.413584+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:26.413701+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:27.413873+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 40960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:28.414013+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 32768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:29.414179+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 32768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:30.414400+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 32768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:31.414603+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 32768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:32.414715+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 32768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:33.414833+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 16384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:34.414929+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 16384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:35.415074+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 16384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:36.415194+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 16384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:37.415341+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 16384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:38.415490+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:39.415588+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:40.415746+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:41.415913+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:42.416011+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:43.416158+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:44.416268+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:45.416430+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:46.416572+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:47.416695+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:48.416804+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:49.417013+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:50.417743+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:51.417927+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:52.418077+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:53.418216+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:54.418366+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:55.418518+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:56.418697+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:57.418933+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:58.419121+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:59.419248+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:00.419418+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:01.419568+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:02.419718+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:03.419858+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:04.420204+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:05.420449+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:06.420601+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:07.420795+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:08.420926+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:09.421045+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:10.421263+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:11.421400+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:12.421547+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:13.421669+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:14.421808+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:15.421967+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:16.422100+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:17.422220+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:18.422349+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:19.422594+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:20.422806+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:21.422967+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:22.423126+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:23.423314+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:24.423466+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:25.423642+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:26.423770+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:27.424000+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:28.424151+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:29.424298+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:30.424450+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:31.424607+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:32.424779+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:33.424977+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:34.425110+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:35.425298+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:36.425461+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:37.425632+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:38.425798+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:39.426016+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:40.426254+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:41.426510+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:42.426771+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 8192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:43.426966+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 0 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:44.427203+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 0 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:45.427460+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 0 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:46.427646+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 0 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:47.427849+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 0 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:48.428044+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 0 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:49.428286+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 0 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:50.428515+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 0 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:51.428752+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 0 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:52.429028+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 0 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:53.429293+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:54.429487+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:55.429677+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:56.429857+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:57.430053+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:58.430283+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:59.430516+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:00.430796+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:01.430993+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:02.431172+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:03.431404+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:04.431605+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:05.431809+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:06.432071+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:07.432314+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:08.432550+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:09.432989+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:10.433550+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:11.433786+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:12.433954+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:13.434262+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:14.434597+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:15.434853+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:16.435070+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:17.435297+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:18.435465+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:19.435649+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:20.436051+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:21.436295+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:22.436526+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:23.436768+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:24.436970+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:25.437182+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:26.437344+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:27.437518+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:28.437699+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:29.437873+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:30.438075+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:31.438216+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:32.438340+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:33.438456+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:34.438559+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:35.439422+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:36.439549+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:37.439684+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:38.439793+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:39.439940+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:40.440070+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:41.440222+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:42.440378+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:43.440510+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:44.440647+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:45.440805+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:46.440962+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:47.441117+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:48.441286+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:49.441666+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:50.441868+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:51.442041+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:52.442158+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:53.442287+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:54.442490+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:55.442635+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:56.442787+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:57.442921+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:58.443079+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:59.443236+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:00.443400+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:01.443568+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:02.443714+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:03.443884+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:04.444055+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:05.444204+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:06.444380+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:07.444512+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:08.444654+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:09.444801+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:10.444972+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:11.445119+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:12.445270+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:13.445387+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:14.445545+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:15.445700+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:16.445865+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:17.445994+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:18.446143+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:19.446311+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:20.446462+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:21.446595+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:22.446759+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:23.446992+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:24.447157+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:25.447425+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:26.447703+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:27.447877+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:28.448045+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:29.448210+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:30.448448+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:31.448682+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:32.448853+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:33.448973+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:34.449123+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:35.449315+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:36.449495+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:37.449660+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:38.449810+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:39.450048+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:40.450216+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:41.450363+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:42.450577+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:43.450726+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:44.450856+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:45.450949+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:46.451088+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:47.451281+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:48.451476+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:49.451638+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:50.451810+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:51.451949+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:52.452134+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:53.452270+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:54.452402+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:55.452543+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:56.452661+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:57.452801+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:58.452957+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:59.453101+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:00.453281+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:01.453452+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:02.453594+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:03.453728+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:04.453872+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:05.454041+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:06.454168+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:07.454305+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:08.454426+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:09.454565+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:10.454761+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:11.454968+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:12.455136+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:13.455299+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:14.455442+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:15.455545+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:16.455760+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:17.455938+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:18.456074+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:19.456255+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:20.456369+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:21.456473+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:22.456632+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:23.456790+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:24.456965+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:25.457154+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:26.457309+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:27.457458+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:28.457599+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:29.457704+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:30.457845+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:31.458009+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:32.458151+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:33.458334+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:34.458487+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:35.458640+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:36.458777+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:37.458943+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:38.459126+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:39.459273+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:40.459663+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:41.459818+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:42.459990+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:43.460128+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:44.460273+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:45.460374+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:46.460545+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:47.460669+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:48.460805+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:49.460943+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:50.461960+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:51.462285+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:52.462998+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:53.463726+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:54.463871+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:55.464012+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:56.464168+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:57.464377+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:58.464526+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:59.464979+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:00.465274+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:01.465668+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:02.465970+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:03.466262+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:04.466471+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:05.466726+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:06.466939+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:07.467082+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:08.467248+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:09.467397+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:10.467597+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:11.467806+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:12.467963+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:13.468136+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:14.468299+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:15.468457+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:16.468606+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:17.468802+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:18.468981+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:19.469190+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:20.469421+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:21.469575+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:22.469755+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:23.469973+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:24.470143+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:25.470301+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:26.470467+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:27.470639+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:28.470769+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:29.470967+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:30.471166+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:31.471299+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:32.471411+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:33.471580+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:34.471749+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:35.471928+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:36.472075+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:37.472219+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:38.472347+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:39.472572+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:40.472759+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:41.472999+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:42.473195+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:43.473371+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:44.473631+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:45.473861+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 5598 writes, 23K keys, 5598 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5598 writes, 864 syncs, 6.48 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.55 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.55 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56260ed1b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:46.474116+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:47.474276+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:48.474470+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:49.474622+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:50.474849+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:51.475032+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:52.475180+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:53.475308+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:54.475498+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:55.475707+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:56.475864+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:57.476032+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:58.476210+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:59.476365+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:00.476515+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:01.476664+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:02.476783+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:03.476958+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:04.477075+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:05.477246+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:06.477383+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:07.477530+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:08.477741+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:09.478028+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:10.478224+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:11.478441+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:12.478593+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:13.478731+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:14.478875+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:15.479087+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:16.479228+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:17.479378+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:18.479547+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:19.479733+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:20.480001+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:21.480148+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:22.480327+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:23.480467+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:24.480552+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:25.480668+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:26.480861+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:27.481046+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:28.481203+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:29.481388+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:30.481552+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:31.481709+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:32.481961+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:33.482126+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:34.482310+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:35.482443+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:36.482585+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:37.482701+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.436401367s of 600.340209961s, submitted: 90
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:38.482871+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:39.483046+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:40.483300+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 1933312 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:41.483451+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:42.483654+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:43.483751+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:44.483953+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:45.484129+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:46.484360+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:47.484512+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:48.484656+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:49.484813+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:50.485009+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:51.485178+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:52.485356+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:53.485524+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:54.485701+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:55.485859+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:56.485976+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:57.486152+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:58.486254+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:59.486440+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:00.486587+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:01.486759+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:02.486879+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:03.487053+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:04.487260+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:05.487406+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:06.487635+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:07.487791+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:08.487969+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:09.488129+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:10.489780+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:11.489937+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:12.490110+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:13.490251+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:14.490372+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:15.490514+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:16.490696+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:17.490846+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:18.491009+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:19.491226+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:20.491452+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:21.491718+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:22.492016+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:23.492243+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:24.492420+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:25.492648+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:26.492876+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:27.493123+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:28.493302+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:29.493580+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:30.493822+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:31.493993+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:32.494227+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:33.494519+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:34.494721+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:35.495101+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:36.495418+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:37.495682+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:38.495868+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:39.496051+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:40.496230+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:41.496411+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:42.496573+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:43.496831+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:44.497093+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:45.497247+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:46.497402+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:47.497552+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:48.497725+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:49.497882+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:50.498141+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:51.498331+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:52.498486+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:53.498619+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:54.498772+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:55.498972+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:56.499109+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:57.499252+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:58.499388+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:59.499539+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:00.499749+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:01.499880+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:02.500083+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:03.500296+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:04.500460+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:05.500649+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:06.500840+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:07.501052+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:08.501233+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:09.501415+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:10.501617+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:11.501792+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:12.502028+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:13.502215+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:14.502428+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:15.502628+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:16.502812+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:17.502992+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:18.503138+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:19.503310+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:20.503534+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:21.503713+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:22.503874+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:23.504091+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:24.504253+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:25.504462+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:26.504640+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:27.504839+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:28.505029+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:29.505251+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:30.505520+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:31.505674+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:32.505815+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 1908736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:33.505979+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 1900544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:34.506168+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 1900544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:35.506341+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 1900544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:36.506534+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 1900544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:37.506751+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 1900544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:38.506941+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 1900544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:39.507124+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 1900544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:40.507332+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 1900544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:41.507494+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:42.507641+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:43.507772+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:44.507972+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:45.508102+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:46.508242+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:47.508419+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:48.508572+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:49.508773+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:50.508994+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:51.509209+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:52.509422+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:53.509630+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:54.509836+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:55.510046+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:56.510204+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:57.510390+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:58.510560+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:59.510729+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:00.510968+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:01.511156+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:02.511340+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:03.511524+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:04.511700+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:05.511885+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:06.512079+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:07.512348+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:08.512553+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:09.512830+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:10.513012+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:11.513117+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:12.513284+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:13.513449+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:14.513595+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:15.513730+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:16.513879+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:17.514052+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:18.514334+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:19.514501+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:20.514727+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:21.514851+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:22.514982+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:23.515141+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:24.515301+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:25.515444+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:26.515622+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:27.515770+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:28.515901+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:29.516079+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:30.516267+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:31.516395+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:32.516579+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 1884160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:33.516753+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:34.516929+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:35.517219+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:36.517358+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:37.517563+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:38.517695+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:39.517817+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:40.518069+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:41.518249+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:42.518447+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:43.518624+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:44.518798+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:45.518972+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:46.519109+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:47.519219+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:48.519344+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:49.519555+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:50.519774+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:51.519992+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:52.520177+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:53.520304+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:54.520540+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:55.520716+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:56.520996+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:57.521172+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:58.521302+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:59.521522+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:00.521742+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:01.521984+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:02.522132+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:03.522272+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:04.522424+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:05.522560+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:06.522685+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:07.522836+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:08.523188+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:09.523411+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:10.523689+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:11.523914+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:12.524084+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:13.524242+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:14.524397+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:15.524517+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:16.524662+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:17.524822+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:18.525041+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:19.525186+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb9b1c/0x16c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:20.525377+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 151552
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:21.525506+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 222.690231323s of 224.058120728s, submitted: 90
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 1875968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:22.525634+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: handle_auth_request added challenge on 0x562612741000
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 1835008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:23.525766+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _renew_subs
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 125 ms_handle_reset con 0x562612741000 session 0x56261030ab40
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 1687552 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fcaa9000/0x0/0x4ffc00000, data 0xbd288/0x173000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:24.525974+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 1687552 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:25.526134+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: handle_auth_request added challenge on 0x562612741400
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 879394 data_alloc: 218103808 data_used: 159744
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 18300928 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:26.526289+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _renew_subs
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 126 ms_handle_reset con 0x562612741400 session 0x562612fca5a0
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 18259968 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:27.526449+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 18243584 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:28.526591+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 18243584 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:29.526719+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fc2a4000/0x0/0x4ffc00000, data 0x8c09bf/0x979000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 18243584 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:30.527014+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886478 data_alloc: 218103808 data_used: 172032
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 18243584 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:31.527175+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc2a1000/0x0/0x4ffc00000, data 0x8c2422/0x97c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 18210816 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:32.527291+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 18210816 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:33.527406+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 18210816 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:34.527539+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 18210816 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:35.527660+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886478 data_alloc: 218103808 data_used: 172032
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 18210816 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:36.527758+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc2a1000/0x0/0x4ffc00000, data 0x8c2422/0x97c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 18178048 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:37.527999+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 18178048 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:38.528129+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 18178048 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:39.528250+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 18178048 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:40.528411+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc2a1000/0x0/0x4ffc00000, data 0x8c2422/0x97c000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886478 data_alloc: 218103808 data_used: 172032
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 18178048 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:41.528556+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 18178048 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:42.528840+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 18178048 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:43.528945+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: handle_auth_request added challenge on 0x56261301fc00
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.454425812s of 22.008068085s, submitted: 54
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 18169856 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:44.529084+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 18104320 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:45.529233+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc2a0000/0x0/0x4ffc00000, data 0x8c2558/0x97e000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Got map version 10
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889294 data_alloc: 218103808 data_used: 176128
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 18145280 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:46.529367+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 18145280 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:47.529503+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc2a0000/0x0/0x4ffc00000, data 0x8c2558/0x97e000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 18145280 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:48.529693+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 18145280 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:49.529811+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 18145280 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:50.529994+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc29f000/0x0/0x4ffc00000, data 0x8c25f3/0x97f000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 891062 data_alloc: 218103808 data_used: 176128
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 18145280 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:51.530157+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 18145280 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:52.530309+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 18145280 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:53.530435+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Got map version 11
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 18087936 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:54.530542+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc29f000/0x0/0x4ffc00000, data 0x8c25f3/0x97f000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: handle_auth_request added challenge on 0x56261301f800
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.824827194s of 10.837122917s, submitted: 3
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 18087936 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:55.530667+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc29f000/0x0/0x4ffc00000, data 0x8c25f3/0x97f000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892302 data_alloc: 218103808 data_used: 176128
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:56.530818+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 18071552 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:57.530977+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 18071552 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:58.531141+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 18071552 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:59.531297+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 18071552 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:00.531518+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 18071552 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc29f000/0x0/0x4ffc00000, data 0x8c25f3/0x97f000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 891612 data_alloc: 218103808 data_used: 176128
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:01.531659+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 18055168 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:02.531817+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 18055168 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:03.531989+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 18055168 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:04.532124+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 18055168 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:05.532251+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 18055168 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.991678238s of 11.003929138s, submitted: 3
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 891612 data_alloc: 218103808 data_used: 176128
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:06.532382+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc29f000/0x0/0x4ffc00000, data 0x8c25f3/0x97f000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 18014208 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:07.532539+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 18014208 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:08.532705+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 18014208 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc29f000/0x0/0x4ffc00000, data 0x8c25f3/0x97f000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:09.532861+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 18014208 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:10.533100+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 18014208 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _renew_subs
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 895610 data_alloc: 218103808 data_used: 184320
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:11.533258+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 18055168 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:12.533444+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 18055168 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:13.533605+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 18055168 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fc29b000/0x0/0x4ffc00000, data 0x8c41d9/0x982000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:14.533752+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:15.533981+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 893350 data_alloc: 218103808 data_used: 184320
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:16.534148+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:17.534245+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fc29e000/0x0/0x4ffc00000, data 0x8c40a3/0x980000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:18.534385+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.727453232s of 13.014475822s, submitted: 28
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:19.534560+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:20.534734+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897524 data_alloc: 218103808 data_used: 192512
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:21.534943+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:22.535121+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fc29a000/0x0/0x4ffc00000, data 0x8c5b06/0x983000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:23.535304+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fc29a000/0x0/0x4ffc00000, data 0x8c5b06/0x983000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:24.535424+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:25.535590+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fc29a000/0x0/0x4ffc00000, data 0x8c5b06/0x983000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897524 data_alloc: 218103808 data_used: 192512
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:26.535707+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fc29a000/0x0/0x4ffc00000, data 0x8c5b06/0x983000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:27.535874+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fc29a000/0x0/0x4ffc00000, data 0x8c5b06/0x983000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:28.536113+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:29.536266+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:30.536464+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fc29b000/0x0/0x4ffc00000, data 0x8c5a6b/0x982000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:31.536633+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 896834 data_alloc: 218103808 data_used: 192512
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:32.536835+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:33.536963+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fc29b000/0x0/0x4ffc00000, data 0x8c5a6b/0x982000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:34.537093+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:35.537287+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:36.537498+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 896834 data_alloc: 218103808 data_used: 192512
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:37.537678+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:38.537855+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _renew_subs
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.400629044s of 19.416978836s, submitted: 14
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:39.538026+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fc298000/0x0/0x4ffc00000, data 0x8c7651/0x985000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:40.538213+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 18046976 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:41.538359+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899808 data_alloc: 218103808 data_used: 192512
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 18038784 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:42.538525+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 18038784 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:43.538690+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 18038784 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:44.538944+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 18038784 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fc298000/0x0/0x4ffc00000, data 0x8c7651/0x985000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:45.539059+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 18038784 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:46.539185+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899808 data_alloc: 218103808 data_used: 192512
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 18038784 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:47.539362+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 16990208 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:48.539567+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 16990208 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:49.539721+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 16990208 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.459293365s of 11.637549400s, submitted: 37
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:50.540000+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 16982016 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fc295000/0x0/0x4ffc00000, data 0x8c90b4/0x988000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:51.540127+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902782 data_alloc: 218103808 data_used: 192512
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 16982016 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fc295000/0x0/0x4ffc00000, data 0x8c90b4/0x988000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:52.540297+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 16982016 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:53.540462+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 16982016 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:54.540603+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 16982016 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:55.540763+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 16982016 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fc295000/0x0/0x4ffc00000, data 0x8c90b4/0x988000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:56.540943+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903670 data_alloc: 218103808 data_used: 192512
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 16982016 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:57.541103+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 16973824 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:58.541248+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 16973824 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:59.541404+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 16973824 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.640153885s of 10.068619728s, submitted: 5
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:00.541607+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 16973824 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fc293000/0x0/0x4ffc00000, data 0x8c9285/0x98b000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:01.541773+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907030 data_alloc: 218103808 data_used: 192512
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 16973824 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:02.541958+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 16973824 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fc28f000/0x0/0x4ffc00000, data 0x8cae6b/0x98e000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:03.542115+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 16973824 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:04.542278+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 16973824 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:05.542452+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 16973824 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:06.542621+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911050 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 16932864 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:07.542785+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: handle_auth_request added challenge on 0x56261301f400
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 16900096 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fc28a000/0x0/0x4ffc00000, data 0x8ccc8c/0x993000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [1])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:08.542956+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Got map version 12
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 16883712 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fc28a000/0x0/0x4ffc00000, data 0x8ccc8c/0x993000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:09.543298+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 16875520 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:10.543478+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.811242580s of 10.441696167s, submitted: 53
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 16867328 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:11.543632+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920358 data_alloc: 218103808 data_used: 212992
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 16859136 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _renew_subs
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:12.543785+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 16842752 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:13.543996+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 16842752 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:14.544126+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 16842752 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fc28b000/0x0/0x4ffc00000, data 0x8ce599/0x992000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:15.544302+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 16826368 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fc288000/0x0/0x4ffc00000, data 0x8d0207/0x995000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:16.544474+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925632 data_alloc: 218103808 data_used: 225280
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 16818176 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:17.544622+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 16818176 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:18.544757+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 16818176 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fc287000/0x0/0x4ffc00000, data 0x8d02a2/0x996000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:19.544921+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 16818176 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:20.545073+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 16818176 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.666393280s of 10.306402206s, submitted: 99
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:21.545184+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931706 data_alloc: 218103808 data_used: 233472
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 15769600 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:22.545309+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 15769600 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:23.545432+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 15769600 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:24.545576+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fc282000/0x0/0x4ffc00000, data 0x8d3805/0x99a000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 15769600 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fc282000/0x0/0x4ffc00000, data 0x8d3805/0x99a000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:25.545686+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 15769600 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:26.545833+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933318 data_alloc: 218103808 data_used: 233472
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 15753216 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:27.545969+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 15753216 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:28.546139+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 15753216 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:29.546268+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 15753216 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:30.546478+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 15720448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc280000/0x0/0x4ffc00000, data 0x8d5288/0x99d000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:31.546625+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935086 data_alloc: 218103808 data_used: 233472
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 15720448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:32.546774+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.658912659s of 11.807299614s, submitted: 52
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 15720448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:33.546962+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 15720448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc27f000/0x0/0x4ffc00000, data 0x8d53be/0x99f000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:34.547173+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 15720448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:35.547336+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 15712256 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:36.547468+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935798 data_alloc: 218103808 data_used: 233472
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 15712256 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc27f000/0x0/0x4ffc00000, data 0x8d53be/0x99f000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:37.547600+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 15712256 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:38.547760+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc27f000/0x0/0x4ffc00000, data 0x8d53be/0x99f000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 15712256 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:39.547959+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 15704064 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:40.548160+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 15704064 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc280000/0x0/0x4ffc00000, data 0x8d5323/0x99e000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 138 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:41.548307+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939004 data_alloc: 218103808 data_used: 262144
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 15695872 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:42.548467+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 15695872 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:43.548659+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:44.548802+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc27b000/0x0/0x4ffc00000, data 0x8d6fd4/0x9a2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc27b000/0x0/0x4ffc00000, data 0x8d6fd4/0x9a2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:45.548929+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.430072784s of 13.595589638s, submitted: 29
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:46.549037+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943570 data_alloc: 218103808 data_used: 262144
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:47.549202+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:48.549343+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:49.549499+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fc278000/0x0/0x4ffc00000, data 0x8d8a57/0x9a5000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:50.549675+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:51.550146+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945338 data_alloc: 218103808 data_used: 262144
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:52.550355+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:53.550650+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:54.550799+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fc279000/0x0/0x4ffc00000, data 0x8d8a57/0x9a5000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:55.551376+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _renew_subs
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:56.552171+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947076 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:57.552361+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:58.552638+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:59.552993+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc276000/0x0/0x4ffc00000, data 0x8da5a2/0x9a7000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.812130928s of 14.100782394s, submitted: 45
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:00.553317+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:01.553557+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950050 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:02.553798+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:03.553940+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc273000/0x0/0x4ffc00000, data 0x8dc005/0x9aa000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc273000/0x0/0x4ffc00000, data 0x8dc005/0x9aa000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:04.554077+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:05.554281+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:06.554532+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950050 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:07.554967+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc273000/0x0/0x4ffc00000, data 0x8dc005/0x9aa000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:08.555134+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:09.555767+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:10.555999+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:11.556196+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951818 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:12.556455+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 15687680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Got map version 13
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:13.556591+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc272000/0x0/0x4ffc00000, data 0x8dc0a0/0x9ab000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 15630336 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:14.556765+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc272000/0x0/0x4ffc00000, data 0x8dc0a0/0x9ab000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 15630336 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:15.556956+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 15630336 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.800889015s of 15.819281578s, submitted: 16
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:16.557134+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950200 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 15630336 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:17.557332+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 15630336 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc273000/0x0/0x4ffc00000, data 0x8dc005/0x9aa000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:18.557475+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 15630336 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:19.557628+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 15622144 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:20.557986+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 15622144 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:21.558178+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc273000/0x0/0x4ffc00000, data 0x8dc0a0/0x9ab000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950938 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 15622144 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:22.558371+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 15622144 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc274000/0x0/0x4ffc00000, data 0x8dc005/0x9aa000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:23.558520+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc274000/0x0/0x4ffc00000, data 0x8dc005/0x9aa000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 15581184 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:24.558662+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 15581184 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:25.558817+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 15581184 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:26.558989+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951840 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 15581184 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.204713821s of 11.229690552s, submitted: 4
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:27.559149+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 15540224 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:28.559313+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 15540224 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:29.559454+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc272000/0x0/0x4ffc00000, data 0x8dc035/0x9ab000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 15540224 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:30.559607+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15556608 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:31.559758+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951968 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15556608 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:32.559929+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15556608 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:33.560090+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15556608 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:34.560237+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15556608 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:35.560402+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc272000/0x0/0x4ffc00000, data 0x8dc035/0x9ab000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15556608 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:36.560579+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951968 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15556608 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:37.560682+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15556608 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.572502136s of 10.632000923s, submitted: 12
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:38.560829+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 15499264 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:39.560969+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc271000/0x0/0x4ffc00000, data 0x8dc103/0x9ac000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 15499264 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc271000/0x0/0x4ffc00000, data 0x8dc103/0x9ac000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:40.561144+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 15499264 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 ms_handle_reset con 0x56261301f400 session 0x562612fc9c20
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:41.561315+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953560 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 14909440 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:42.561481+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Got map version 14
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 14901248 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:43.561643+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc271000/0x0/0x4ffc00000, data 0x8dc0d0/0x9ac000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 14901248 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:44.561803+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 14893056 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:45.561947+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 14876672 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:46.562086+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26e000/0x0/0x4ffc00000, data 0x8dc265/0x9ae000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958336 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 14843904 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:47.562208+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 14843904 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26e000/0x0/0x4ffc00000, data 0x8dc32e/0x9af000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:48.562359+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.431452751s of 10.563019753s, submitted: 199
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 14819328 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:49.562543+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 14819328 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:50.562756+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 14819328 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:51.562954+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957518 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 14786560 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:52.563115+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 14794752 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:53.563297+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc271000/0x0/0x4ffc00000, data 0x8dc1cc/0x9ad000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 14786560 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:54.563460+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 14786560 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:55.563596+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 14778368 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:56.563724+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958724 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 14778368 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:57.563866+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0x8dc232/0x9ae000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 14753792 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:58.563943+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 14753792 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:59.564112+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.824316025s of 10.967424393s, submitted: 29
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 14721024 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:00.564288+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 14721024 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:01.564484+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961346 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 14721024 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:02.566191+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0x8dc32c/0x9af000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 14721024 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:03.566308+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0x8dc32c/0x9af000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 13672448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:04.566454+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 13672448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:05.566634+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 13672448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:06.566830+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960612 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 13631488 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:07.584542+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 13631488 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:08.584773+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0x8dc1c8/0x9ad000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 13631488 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:09.584981+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.987728119s of 10.111098289s, submitted: 25
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 13631488 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:10.585182+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 13598720 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:11.585296+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962332 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 13598720 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0x8dc20a/0x9ae000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:12.585443+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 13598720 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:13.585621+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 13598720 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:14.585741+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26e000/0x0/0x4ffc00000, data 0x8dc1f5/0x9ae000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 13533184 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:15.585868+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 13533184 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:16.586018+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962800 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 13516800 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:17.586164+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc271000/0x0/0x4ffc00000, data 0x8dc170/0x9ad000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 13500416 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:18.586275+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 13500416 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:19.586346+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 13500416 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:20.586531+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.955349922s of 11.056019783s, submitted: 22
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 13475840 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:21.586680+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964568 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 13475840 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:22.586857+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26d000/0x0/0x4ffc00000, data 0x8dc1fc/0x9ae000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 13475840 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:23.586988+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 13475840 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:24.587154+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 13426688 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:25.587279+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 13426688 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:26.587418+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963832 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 13426688 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc270000/0x0/0x4ffc00000, data 0x8dc20a/0x9ae000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:27.587561+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 13418496 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:28.587846+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 13402112 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:29.588057+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 13402112 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:30.588303+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26e000/0x0/0x4ffc00000, data 0x8dc1fa/0x9ae000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 13402112 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.464566231s of 10.624966621s, submitted: 33
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:31.588658+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964862 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 13402112 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:32.588958+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 13402112 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc26e000/0x0/0x4ffc00000, data 0x8dc209/0x9ae000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:33.589136+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 13410304 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:34.589402+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _renew_subs
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 13385728 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:35.589571+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 13377536 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:36.589759+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972314 data_alloc: 218103808 data_used: 278528
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 13303808 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:37.589983+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 13303808 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:38.590212+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fc266000/0x0/0x4ffc00000, data 0x8ddde2/0x9b1000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 13279232 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:39.590381+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fc26c000/0x0/0x4ffc00000, data 0x8dde23/0x9b1000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 13271040 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 143 handle_osd_map epochs [144,145], i have 143, src has [1,145]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:40.590537+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 13180928 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:41.590676+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976430 data_alloc: 218103808 data_used: 286720
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 13180928 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:42.590867+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 13180928 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.312106133s of 12.044094086s, submitted: 91
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:43.591043+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 13172736 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:44.591222+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fc267000/0x0/0x4ffc00000, data 0x8e13f0/0x9b6000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 13164544 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:45.591513+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 13164544 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:46.591695+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978682 data_alloc: 218103808 data_used: 294912
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 13156352 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:47.591970+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 13131776 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:48.592129+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 13123584 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:49.592307+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fc263000/0x0/0x4ffc00000, data 0x8e2e30/0x9b9000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 13123584 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:50.592481+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:51.592632+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977930 data_alloc: 218103808 data_used: 294912
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 13115392 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:52.592785+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 13115392 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:53.592958+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 13115392 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.075264931s of 11.000350952s, submitted: 27
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:54.593062+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 13115392 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:55.593212+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fc264000/0x0/0x4ffc00000, data 0x8e2dfd/0x9b8000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:56.593372+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976246 data_alloc: 218103808 data_used: 294912
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:57.593523+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:58.593656+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:59.593791+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:00.594081+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:01.594258+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fc266000/0x0/0x4ffc00000, data 0x8e2d33/0x9b7000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976550 data_alloc: 218103808 data_used: 294912
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:02.594386+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:03.594516+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:04.594663+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 13107200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:05.594808+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.013538361s of 11.418750763s, submitted: 10
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12648448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:06.595048+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982684 data_alloc: 218103808 data_used: 294912
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12648448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:07.595333+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fc251000/0x0/0x4ffc00000, data 0x8f7825/0x9cc000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 12615680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:08.595509+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 12615680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:09.595708+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 12271616 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:10.595931+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 12173312 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fc22b000/0x0/0x4ffc00000, data 0x91daf7/0x9f2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:11.596093+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fc22b000/0x0/0x4ffc00000, data 0x91daf7/0x9f2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x2fdf9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987080 data_alloc: 218103808 data_used: 294912
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 12173312 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:12.596239+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 9641984 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:13.596425+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 9478144 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:14.596574+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 9363456 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:15.596730+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 9363456 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:16.596938+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.701203346s of 11.124578476s, submitted: 48
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989334 data_alloc: 218103808 data_used: 294912
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 9109504 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:17.597161+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fb02b000/0x0/0x4ffc00000, data 0x97d095/0xa52000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 9109504 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:18.598159+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 9109504 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:19.598307+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 8962048 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:20.598519+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 8986624 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:21.598754+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994082 data_alloc: 218103808 data_used: 294912
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 8822784 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:22.598964+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4faff0000/0x0/0x4ffc00000, data 0x9b88d9/0xa8d000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 8568832 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:23.599164+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 8568832 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:24.599289+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 8380416 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:25.599475+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 7217152 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:26.599691+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993026 data_alloc: 218103808 data_used: 294912
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 7217152 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:27.599872+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.497201920s of 10.907721519s, submitted: 61
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:28.600114+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 7036928 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4faf79000/0x0/0x4ffc00000, data 0xa2ee5e/0xb04000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:29.600279+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 7168000 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:30.600465+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 6922240 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4faf41000/0x0/0x4ffc00000, data 0xa6721a/0xb3d000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:31.600631+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 6897664 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002310 data_alloc: 218103808 data_used: 294912
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:32.600752+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 6938624 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:33.600884+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 6938624 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:34.601035+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 6963200 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:35.601223+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 7200768 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:36.601366+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 7176192 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4faef7000/0x0/0x4ffc00000, data 0xab16ad/0xb87000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007510 data_alloc: 218103808 data_used: 294912
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:37.601535+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 5840896 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2478246780' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 01 17:19:15 compute-0 ceph-mon[74273]: from='client.14865 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2643128419' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 01 17:19:15 compute-0 ceph-mon[74273]: from='client.14869 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:15 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2015205854' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.503462076s of 10.137934685s, submitted: 59
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4faede000/0x0/0x4ffc00000, data 0xaca0ee/0xba0000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:38.601678+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 5480448 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:39.601849+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 5644288 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:40.602072+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 5636096 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:41.602216+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 5423104 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010706 data_alloc: 218103808 data_used: 294912
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:42.602360+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 5300224 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4faea5000/0x0/0x4ffc00000, data 0xb020a3/0xbd9000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:43.602646+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 5267456 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fae99000/0x0/0x4ffc00000, data 0xb0e3c7/0xbe5000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,1])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:44.602827+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 5128192 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fae98000/0x0/0x4ffc00000, data 0xb0f3fa/0xbe6000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:45.603011+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 5267456 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 7483 writes, 29K keys, 7483 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7482 writes, 1560 syncs, 4.80 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1885 writes, 5550 keys, 1885 commit groups, 1.0 writes per commit group, ingest: 5.36 MB, 0.01 MB/s
                                           Interval WAL: 1884 writes, 696 syncs, 2.71 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:46.603171+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 5251072 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fae5a000/0x0/0x4ffc00000, data 0xb4c322/0xc23000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1018658 data_alloc: 218103808 data_used: 294912
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:47.603416+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 5046272 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.035921097s of 10.091075897s, submitted: 57
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:48.603534+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 4022272 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fae2a000/0x0/0x4ffc00000, data 0xb7b397/0xc52000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:49.603713+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 3874816 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fae13000/0x0/0x4ffc00000, data 0xb9341f/0xc6a000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,0,0,0,4])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:50.603974+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 4153344 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:51.604269+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 4030464 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021560 data_alloc: 218103808 data_used: 294912
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:52.604447+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 3940352 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc ms_handle_reset ms_handle_reset con 0x562610a32000
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3235544197
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: get_auth_request con 0x562611b52800 auth_method 0
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_configure stats_period=5
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:53.604593+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 4186112 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:54.604792+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 4055040 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fad9a000/0x0/0x4ffc00000, data 0xc0c0ab/0xce3000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:55.604952+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3645440 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:56.605183+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 2998272 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029546 data_alloc: 218103808 data_used: 294912
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:57.605341+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 2154496 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.690278053s of 10.004817963s, submitted: 79
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:58.605543+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 2088960 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:59.605723+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 2523136 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fad08000/0x0/0x4ffc00000, data 0xc9f894/0xd75000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fad08000/0x0/0x4ffc00000, data 0xc9f894/0xd75000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:00.605979+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 2695168 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:01.606141+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 2646016 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030630 data_alloc: 218103808 data_used: 294912
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:02.606313+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 2416640 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:03.606449+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86695936 unmapped: 2383872 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4face7000/0x0/0x4ffc00000, data 0xcc11da/0xd97000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:04.606616+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86802432 unmapped: 2277376 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:05.606770+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 2260992 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:06.606948+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 2260992 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0xce2d96/0xdb8000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1031962 data_alloc: 218103808 data_used: 294912
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:07.607108+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 2310144 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:08.607279+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86777856 unmapped: 2301952 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0xce2d63/0xdb8000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:09.607506+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86777856 unmapped: 2301952 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:10.607743+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86777856 unmapped: 2301952 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:11.607887+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86777856 unmapped: 2301952 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.069967270s of 13.553690910s, submitted: 48
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0xce2d63/0xdb8000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:12.608082+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1032642 data_alloc: 218103808 data_used: 294912
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86786048 unmapped: 2293760 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4facc4000/0x0/0x4ffc00000, data 0xce2e30/0xdb9000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [0,0,0,0,1])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4facc1000/0x0/0x4ffc00000, data 0xce2f58/0xdbb000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:13.608206+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 2285568 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4facc1000/0x0/0x4ffc00000, data 0xce2f58/0xdbb000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:14.608372+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86802432 unmapped: 2277376 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:15.608542+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86810624 unmapped: 2269184 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:16.608677+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86810624 unmapped: 2269184 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:17.608802+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035954 data_alloc: 218103808 data_used: 294912
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86810624 unmapped: 2269184 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4facc1000/0x0/0x4ffc00000, data 0xce2f57/0xdbb000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:18.608946+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86810624 unmapped: 2269184 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:19.609077+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2252800 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:20.609259+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2252800 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:21.609498+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2252800 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.607082367s of 10.006441116s, submitted: 26
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:22.609633+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035650 data_alloc: 218103808 data_used: 294912
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2236416 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:23.609764+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2236416 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 heartbeat osd_stat(store_statfs(0x4facbf000/0x0/0x4ffc00000, data 0xce3020/0xdbc000, compress 0x0/0x0/0x0, omap 0x637, meta 0x417f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:24.609926+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 2211840 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:25.610062+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 2187264 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:26.610219+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 2187264 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:27.610417+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042184 data_alloc: 218103808 data_used: 303104
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87949312 unmapped: 1130496 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:28.610605+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87949312 unmapped: 1130496 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa8ae000/0x0/0x4ffc00000, data 0xce4c05/0xdbf000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:29.610792+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87957504 unmapped: 1122304 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa8ae000/0x0/0x4ffc00000, data 0xce4bd3/0xdbf000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:30.611016+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87957504 unmapped: 1122304 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:31.611185+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87957504 unmapped: 1122304 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.498619080s of 10.211069107s, submitted: 60
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:32.611325+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043426 data_alloc: 218103808 data_used: 303104
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa8ad000/0x0/0x4ffc00000, data 0xce4ba1/0xdbe000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87973888 unmapped: 1105920 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:33.611455+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87982080 unmapped: 1097728 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:34.611594+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87982080 unmapped: 1097728 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:35.611776+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87982080 unmapped: 1097728 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:36.611958+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87982080 unmapped: 1097728 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa8ad000/0x0/0x4ffc00000, data 0xce6763/0xdc0000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:37.612170+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047380 data_alloc: 218103808 data_used: 311296
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 87982080 unmapped: 1097728 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:38.612364+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 2088960 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:39.612477+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 2023424 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:40.612622+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 88113152 unmapped: 2015232 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa8ac000/0x0/0x4ffc00000, data 0xce6815/0xdc1000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:41.613590+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 1990656 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.183484077s of 10.012637138s, submitted: 165
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa8a9000/0x0/0x4ffc00000, data 0xce83fb/0xdc4000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:42.613742+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053858 data_alloc: 218103808 data_used: 319488
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 1990656 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:43.613946+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 88154112 unmapped: 1974272 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa8a6000/0x0/0x4ffc00000, data 0xce8414/0xdc5000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:44.614141+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 88154112 unmapped: 1974272 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:45.614311+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 88162304 unmapped: 1966080 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:46.614495+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0xce8489/0xdc6000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 88178688 unmapped: 1949696 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0xce9eec/0xdc9000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:47.614671+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061560 data_alloc: 218103808 data_used: 319488
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89243648 unmapped: 884736 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:48.614847+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8a1000/0x0/0x4ffc00000, data 0xce9f7e/0xdca000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:49.615026+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:50.615218+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:51.615485+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:52.615666+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058656 data_alloc: 218103808 data_used: 319488
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:53.615819+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.486909866s of 11.658401489s, submitted: 39
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8a9000/0x0/0x4ffc00000, data 0xce9cb6/0xdc5000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:54.616220+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:55.616393+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:56.616586+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:57.616840+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057902 data_alloc: 218103808 data_used: 327680
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:58.617067+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa8a6000/0x0/0x4ffc00000, data 0xceb89c/0xdc8000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa8a6000/0x0/0x4ffc00000, data 0xceb89c/0xdc8000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:59.617249+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 876544 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:00.617455+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89260032 unmapped: 868352 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:01.617657+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89260032 unmapped: 1916928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:02.617810+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065032 data_alloc: 218103808 data_used: 335872
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89260032 unmapped: 1916928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:03.617997+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89260032 unmapped: 1916928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa89f000/0x0/0x4ffc00000, data 0xceee4a/0xdcd000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:04.618147+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89260032 unmapped: 1916928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:05.618385+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89260032 unmapped: 1916928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:06.618571+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89260032 unmapped: 1916928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:07.618710+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065032 data_alloc: 218103808 data_used: 335872
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89260032 unmapped: 1916928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.398724556s of 14.739449501s, submitted: 62
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:08.618846+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa89f000/0x0/0x4ffc00000, data 0xceee4a/0xdcd000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89268224 unmapped: 1908736 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:09.619008+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 89268224 unmapped: 1908736 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:10.619223+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 90333184 unmapped: 843776 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:11.619370+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 90333184 unmapped: 843776 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:12.619497+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071592 data_alloc: 218103808 data_used: 335872
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 90333184 unmapped: 843776 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:13.619627+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 90333184 unmapped: 843776 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:14.619724+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fa899000/0x0/0x4ffc00000, data 0xcf24ef/0xdd3000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 90341376 unmapped: 835584 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:15.619983+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 90357760 unmapped: 819200 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:16.620107+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 90357760 unmapped: 819200 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa896000/0x0/0x4ffc00000, data 0xcf400d/0xdd7000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:17.620269+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072468 data_alloc: 218103808 data_used: 344064
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91414528 unmapped: 811008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:18.620415+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91414528 unmapped: 811008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:19.620565+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91414528 unmapped: 811008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:20.620718+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91414528 unmapped: 811008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa899000/0x0/0x4ffc00000, data 0xcf3ed7/0xdd5000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.493772507s of 12.691194534s, submitted: 54
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:21.620883+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91414528 unmapped: 811008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:22.621102+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0xcf5abd/0xdd8000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076802 data_alloc: 218103808 data_used: 356352
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91414528 unmapped: 811008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:23.621241+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91414528 unmapped: 811008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:24.621393+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91414528 unmapped: 811008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa895000/0x0/0x4ffc00000, data 0xcf5abd/0xdd8000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:25.621526+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91422720 unmapped: 802816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:26.621677+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91422720 unmapped: 802816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:27.621831+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080448 data_alloc: 218103808 data_used: 356352
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91430912 unmapped: 794624 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:28.621990+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91439104 unmapped: 786432 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:29.622155+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91439104 unmapped: 786432 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fa892000/0x0/0x4ffc00000, data 0xcf7520/0xddb000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:30.622348+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91439104 unmapped: 786432 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:31.622507+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91439104 unmapped: 786432 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fa892000/0x0/0x4ffc00000, data 0xcf7520/0xddb000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:32.622644+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079776 data_alloc: 218103808 data_used: 356352
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91439104 unmapped: 786432 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:33.622864+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fa892000/0x0/0x4ffc00000, data 0xcf7520/0xddb000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91447296 unmapped: 778240 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:34.623080+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91447296 unmapped: 778240 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:35.623256+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91447296 unmapped: 778240 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:36.623446+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91447296 unmapped: 778240 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:37.623684+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079776 data_alloc: 218103808 data_used: 356352
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91447296 unmapped: 778240 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fa892000/0x0/0x4ffc00000, data 0xcf7520/0xddb000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:38.623837+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91447296 unmapped: 778240 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:39.623968+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91447296 unmapped: 778240 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:40.624136+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fa892000/0x0/0x4ffc00000, data 0xcf7520/0xddb000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91463680 unmapped: 761856 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.418195724s of 20.149101257s, submitted: 36
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:41.624271+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91488256 unmapped: 737280 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fa88f000/0x0/0x4ffc00000, data 0xcf9136/0xdde000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:42.624420+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082750 data_alloc: 218103808 data_used: 356352
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91488256 unmapped: 737280 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:43.624578+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91488256 unmapped: 737280 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fa88f000/0x0/0x4ffc00000, data 0xcf9136/0xdde000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:44.624747+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Got map version 15
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91496448 unmapped: 729088 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: handle_auth_request added challenge on 0x5626134a6000
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:45.624889+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 91496448 unmapped: 729088 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fa88f000/0x0/0x4ffc00000, data 0xcf9248/0xddf000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:46.625031+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 729088 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:47.625129+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087812 data_alloc: 218103808 data_used: 364544
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 729088 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfaccb/0xde2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:48.625246+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 712704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:49.625375+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 712704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:50.625511+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfabb9/0xde1000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 712704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:51.625623+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 712704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfabb9/0xde1000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:52.625755+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087074 data_alloc: 218103808 data_used: 364544
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 712704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:53.625923+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 712704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:54.626085+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 712704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:55.626285+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 712704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:56.626454+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfabb9/0xde1000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:57.626577+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087074 data_alloc: 218103808 data_used: 364544
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:58.626725+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:59.626885+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:00.627114+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.447078705s of 19.782892227s, submitted: 39
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:01.627298+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:02.627478+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88c000/0x0/0x4ffc00000, data 0xcfac54/0xde2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086932 data_alloc: 218103808 data_used: 364544
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:03.627649+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:04.627831+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:05.628032+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:06.628284+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88c000/0x0/0x4ffc00000, data 0xcfac54/0xde2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:07.628468+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086756 data_alloc: 218103808 data_used: 364544
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:08.628611+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:09.628809+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88c000/0x0/0x4ffc00000, data 0xcfac54/0xde2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:10.629033+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.371644974s of 10.547924042s, submitted: 4
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:11.629164+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 704512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:12.629331+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088524 data_alloc: 218103808 data_used: 364544
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 696320 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:13.629489+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 696320 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:14.629677+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfacef/0xde3000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 696320 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfacef/0xde3000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:15.629837+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 696320 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:16.630025+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 696320 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:17.630184+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087786 data_alloc: 218103808 data_used: 364544
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 696320 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfacef/0xde3000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:18.630361+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfac54/0xde2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 696320 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:19.630540+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 696320 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:20.630750+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 696320 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:21.631009+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfacef/0xde3000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 696320 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:22.631129+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088524 data_alloc: 218103808 data_used: 364544
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 688128 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:23.631287+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 688128 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:24.631554+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.063726425s of 13.080301285s, submitted: 4
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 688128 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:25.631709+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 688128 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:26.631850+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 688128 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:27.631984+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88c000/0x0/0x4ffc00000, data 0xcfac54/0xde2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086756 data_alloc: 218103808 data_used: 364544
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 688128 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:28.632104+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 688128 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:29.632277+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 688128 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:30.632520+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88a000/0x0/0x4ffc00000, data 0xcfad8a/0xde4000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 679936 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:31.632791+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88a000/0x0/0x4ffc00000, data 0xcfad8a/0xde4000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 679936 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:32.632934+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090292 data_alloc: 218103808 data_used: 364544
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 679936 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:33.633154+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 679936 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:34.633346+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.989791870s of 10.007966042s, submitted: 3
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88a000/0x0/0x4ffc00000, data 0xcfad8a/0xde4000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 679936 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:35.633540+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 679936 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:36.633682+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 679936 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:37.633959+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089426 data_alloc: 218103808 data_used: 364544
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfacef/0xde3000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 671744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:38.634125+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 671744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:39.634362+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 671744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0xcfacef/0xde3000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:40.634569+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 671744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:41.634749+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 671744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:42.634979+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092048 data_alloc: 218103808 data_used: 364544
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92610560 unmapped: 663552 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa889000/0x0/0x4ffc00000, data 0xcfae25/0xde5000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:43.635227+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92610560 unmapped: 663552 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:44.635424+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92610560 unmapped: 663552 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:45.635567+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _renew_subs
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.755108833s of 11.030391693s, submitted: 7
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:46.635763+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:47.636065+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093636 data_alloc: 218103808 data_used: 372736
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:48.636497+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa888000/0x0/0x4ffc00000, data 0xcfc86a/0xde5000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:49.636850+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:50.637287+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:51.637519+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:52.637718+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa888000/0x0/0x4ffc00000, data 0xcfc86a/0xde5000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 161 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097282 data_alloc: 218103808 data_used: 372736
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92651520 unmapped: 622592 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:53.637855+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92651520 unmapped: 622592 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:54.638003+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92651520 unmapped: 622592 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:55.638150+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.397669792s of 10.364931107s, submitted: 39
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92651520 unmapped: 622592 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:56.638372+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92651520 unmapped: 622592 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:57.638561+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096808 data_alloc: 218103808 data_used: 372736
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92651520 unmapped: 622592 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:58.638735+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa886000/0x0/0x4ffc00000, data 0xcfe2ed/0xde8000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92651520 unmapped: 622592 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:59.639028+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92659712 unmapped: 1662976 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:00.639367+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92659712 unmapped: 1662976 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:01.639746+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92659712 unmapped: 1662976 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:02.639977+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100116 data_alloc: 218103808 data_used: 380928
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92659712 unmapped: 1662976 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa883000/0x0/0x4ffc00000, data 0xcffe68/0xdea000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:03.640168+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92659712 unmapped: 1662976 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:04.640407+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa883000/0x0/0x4ffc00000, data 0xcffe68/0xdea000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92659712 unmapped: 1662976 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:05.640676+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92667904 unmapped: 1654784 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:06.640852+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92667904 unmapped: 1654784 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:07.641099+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104682 data_alloc: 218103808 data_used: 380928
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92667904 unmapped: 1654784 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:08.641378+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa87f000/0x0/0x4ffc00000, data 0xd019e1/0xdee000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92667904 unmapped: 1654784 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:09.641626+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.413195610s of 14.541606903s, submitted: 76
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:10.641838+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:11.642095+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa87e000/0x0/0x4ffc00000, data 0xd01af6/0xdef000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:12.642303+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106450 data_alloc: 218103808 data_used: 380928
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:13.642522+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa87e000/0x0/0x4ffc00000, data 0xd01af6/0xdef000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:14.642712+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa87e000/0x0/0x4ffc00000, data 0xd01af6/0xdef000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:15.642923+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:16.643144+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:17.643364+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106610 data_alloc: 218103808 data_used: 385024
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:18.643571+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa87e000/0x0/0x4ffc00000, data 0xd01af6/0xdef000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa87e000/0x0/0x4ffc00000, data 0xd01af6/0xdef000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:19.643773+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:20.644002+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:21.644217+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:22.644446+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106610 data_alloc: 218103808 data_used: 385024
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 1646592 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa87e000/0x0/0x4ffc00000, data 0xd01af6/0xdef000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:23.644662+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.262932777s of 13.595630646s, submitted: 1
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 1638400 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:24.644865+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 1638400 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:25.645082+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 1638400 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:26.645284+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0xd018eb/0xded000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 1638400 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:27.645430+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103112 data_alloc: 218103808 data_used: 380928
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 1638400 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:28.645587+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0xd018eb/0xded000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 1638400 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:29.645816+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 1638400 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:30.646083+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92700672 unmapped: 1622016 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:31.646335+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa87f000/0x0/0x4ffc00000, data 0xd01a21/0xdef000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92700672 unmapped: 1622016 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:32.646560+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106648 data_alloc: 218103808 data_used: 380928
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92700672 unmapped: 1622016 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:33.646705+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92700672 unmapped: 1622016 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:34.646963+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92700672 unmapped: 1622016 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:35.647184+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92700672 unmapped: 1622016 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:36.647386+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.171231270s of 12.513448715s, submitted: 4
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa87f000/0x0/0x4ffc00000, data 0xd01a21/0xdef000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 1597440 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:37.647588+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110470 data_alloc: 218103808 data_used: 389120
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 1597440 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:38.647766+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa87b000/0x0/0x4ffc00000, data 0xd03607/0xdf2000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 1597440 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:39.647974+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa87c000/0x0/0x4ffc00000, data 0xd0356c/0xdf1000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 1597440 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:40.648219+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 165 handle_osd_map epochs [165,166], i have 165, src has [1,166]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 1572864 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:41.648415+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 1564672 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:42.648597+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa879000/0x0/0x4ffc00000, data 0xd04fcf/0xdf4000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114312 data_alloc: 218103808 data_used: 401408
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 1564672 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:43.648737+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 1564672 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:44.648961+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 1564672 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:45.649164+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 1564672 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:46.649362+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa879000/0x0/0x4ffc00000, data 0xd04fcf/0xdf4000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 1564672 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:47.649581+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114312 data_alloc: 218103808 data_used: 401408
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 1564672 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:48.649759+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa879000/0x0/0x4ffc00000, data 0xd04fcf/0xdf4000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.479110718s of 12.668789864s, submitted: 39
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92766208 unmapped: 1556480 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:49.649972+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92766208 unmapped: 1556480 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:50.650182+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa878000/0x0/0x4ffc00000, data 0xd0506a/0xdf5000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92774400 unmapped: 1548288 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:51.650335+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92774400 unmapped: 1548288 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:52.650464+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0xd04fcf/0xdf4000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113432 data_alloc: 218103808 data_used: 401408
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92774400 unmapped: 1548288 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:53.650584+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92774400 unmapped: 1548288 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:54.650688+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92782592 unmapped: 1540096 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:55.650800+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92782592 unmapped: 1540096 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:56.650929+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _renew_subs
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 1507328 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:57.651048+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116740 data_alloc: 218103808 data_used: 409600
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 1507328 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:58.651210+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa876000/0x0/0x4ffc00000, data 0xd06b1a/0xdf6000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 1507328 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:59.651411+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 1507328 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:00.651579+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 1507328 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:01.651741+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 1507328 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:02.651938+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa877000/0x0/0x4ffc00000, data 0xd06b1a/0xdf6000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.070169449s of 13.739535332s, submitted: 28
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119714 data_alloc: 218103808 data_used: 409600
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 1499136 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:03.652248+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 1499136 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:04.652467+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 1499136 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:05.652626+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 1499136 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:06.652739+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 1499136 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:07.652847+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 168 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0xd0857d/0xdf9000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119714 data_alloc: 218103808 data_used: 409600
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 1499136 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:08.653021+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 1499136 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:09.653142+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 1499136 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:10.653319+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:11.653517+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 1499136 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 168 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0xd0857d/0xdf9000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:12.653649+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92831744 unmapped: 1490944 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 168 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0xd0857d/0xdf9000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119714 data_alloc: 218103808 data_used: 409600
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Got map version 16
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:13.653784+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92856320 unmapped: 1466368 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.395549774s of 10.594060898s, submitted: 25
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 168 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0xd0857d/0xdf9000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:14.653987+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92856320 unmapped: 1466368 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 168 heartbeat osd_stat(store_statfs(0x4fa873000/0x0/0x4ffc00000, data 0xd08618/0xdfa000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:15.654200+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92856320 unmapped: 1466368 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:16.654379+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92864512 unmapped: 1458176 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 169 handle_osd_map epochs [169,170], i have 169, src has [1,170]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:17.654554+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129276 data_alloc: 218103808 data_used: 417792
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:18.654680+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 170 heartbeat osd_stat(store_statfs(0x4fa86b000/0x0/0x4ffc00000, data 0xd0be44/0xe00000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:19.654833+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:20.655019+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:21.655194+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:22.655325+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 170 heartbeat osd_stat(store_statfs(0x4fa86c000/0x0/0x4ffc00000, data 0xd0bda9/0xdff000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129858 data_alloc: 218103808 data_used: 417792
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:23.655486+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa86b000/0x0/0x4ffc00000, data 0xd0d82c/0xe02000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:24.655639+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:25.655805+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:26.655939+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:27.656096+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 1449984 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129858 data_alloc: 218103808 data_used: 417792
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:28.656305+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:29.656480+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa86b000/0x0/0x4ffc00000, data 0xd0d82c/0xe02000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:30.656672+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:31.656829+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:32.657977+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129858 data_alloc: 218103808 data_used: 417792
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:33.658108+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:34.658304+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa86b000/0x0/0x4ffc00000, data 0xd0d82c/0xe02000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:35.658641+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:36.658872+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:37.659080+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129858 data_alloc: 218103808 data_used: 417792
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:38.659200+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa86b000/0x0/0x4ffc00000, data 0xd0d82c/0xe02000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:39.659447+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:40.659577+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 1441792 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:41.659709+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.810922623s of 27.994199753s, submitted: 63
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 171 ms_handle_reset con 0x5626134a6000 session 0x5626135681e0
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93249536 unmapped: 1073152 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:42.659847+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93249536 unmapped: 1073152 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Got map version 17
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128978 data_alloc: 218103808 data_used: 417792
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:43.659986+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa86c000/0x0/0x4ffc00000, data 0xd0d82c/0xe02000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa86c000/0x0/0x4ffc00000, data 0xd0d82c/0xe02000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:44.660126+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 171 handle_osd_map epochs [171,172], i have 171, src has [1,172]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:45.660293+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:46.660455+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:47.660615+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133152 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:48.660761+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:49.661002+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0xd0f412/0xe05000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:50.661292+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0xd0f412/0xe05000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:51.661429+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:52.661605+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133152 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:53.661798+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0xd0f412/0xe05000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:54.661938+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:55.662059+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:56.662199+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1056768 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 172 handle_osd_map epochs [172,173], i have 172, src has [1,173]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.232500076s of 15.341207504s, submitted: 203
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:57.662339+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136126 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:58.662517+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:59.662688+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:00.662885+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:01.663080+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:02.663220+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:03.663431+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136126 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:04.663579+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:05.663768+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:06.663968+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:07.664113+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93274112 unmapped: 1048576 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:08.664276+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136126 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:09.664469+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:10.664712+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:11.664860+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:12.665019+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:13.665159+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136126 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:14.665300+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:15.665437+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:16.665549+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:17.665693+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:18.665817+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136126 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 1040384 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:19.665965+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:20.666135+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:21.666279+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:22.666436+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:23.666562+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136126 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:24.666751+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:25.666881+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:26.667043+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:27.667161+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:28.667374+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136126 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:29.667523+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:30.667676+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:31.667799+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93290496 unmapped: 1032192 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:32.667919+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93298688 unmapped: 1024000 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:33.668068+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136126 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93298688 unmapped: 1024000 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:34.668237+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93298688 unmapped: 1024000 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:35.668360+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93298688 unmapped: 1024000 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:36.668529+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93298688 unmapped: 1024000 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:37.668674+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93298688 unmapped: 1024000 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:38.668833+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136126 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 1220608 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:39.669001+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa865000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 1220608 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:40.669161+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 1220608 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.292816162s of 44.330963135s, submitted: 13
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 ms_handle_reset con 0x56261301fc00 session 0x562612fca5a0
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:41.669285+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93339648 unmapped: 983040 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:42.669399+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 974848 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Got map version 18
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:43.669584+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 974848 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:44.669706+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 974848 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:45.669817+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 974848 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:46.669981+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 974848 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:47.670159+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 974848 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:48.670313+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 974848 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:49.670422+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 974848 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:50.670570+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 974848 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:51.670678+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 974848 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:52.670805+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93388800 unmapped: 933888 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'config diff' '{prefix=config diff}'
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'config show' '{prefix=config show}'
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'counter dump' '{prefix=counter dump}'
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:53.670949+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'counter schema' '{prefix=counter schema}'
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 1769472 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:54.671091+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93847552 unmapped: 1523712 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:55.671245+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'log dump' '{prefix=log dump}'
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93847552 unmapped: 12566528 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'perf dump' '{prefix=perf dump}'
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:56.671365+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'perf schema' '{prefix=perf schema}'
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93683712 unmapped: 12730368 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:57.671496+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93683712 unmapped: 12730368 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:58.671621+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93683712 unmapped: 12730368 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:59.671755+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93683712 unmapped: 12730368 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:00.671915+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93683712 unmapped: 12730368 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:01.672041+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93683712 unmapped: 12730368 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:02.672163+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93683712 unmapped: 12730368 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:03.672286+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93683712 unmapped: 12730368 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:04.672422+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93683712 unmapped: 12730368 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:05.672603+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93683712 unmapped: 12730368 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:06.672736+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93691904 unmapped: 12722176 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:07.672871+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93691904 unmapped: 12722176 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:08.676056+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93691904 unmapped: 12722176 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:09.676169+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93691904 unmapped: 12722176 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:10.676314+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93691904 unmapped: 12722176 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:11.676441+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93691904 unmapped: 12722176 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:12.676569+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93691904 unmapped: 12722176 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Got map version 19
Oct 01 17:19:15 compute-0 ceph-osd[90269]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:13.676684+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 12812288 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:14.676799+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 12812288 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:15.676934+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 12812288 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:16.677084+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 12812288 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:17.677201+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 12812288 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:18.677320+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 12812288 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:19.677658+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 12812288 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:20.677797+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 12812288 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:21.684944+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 12812288 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:22.685286+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:23.685472+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:24.685576+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:25.685704+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:26.685828+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:27.685948+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:28.686083+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:29.686228+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:30.686391+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:31.686524+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:32.686646+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:33.686774+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:34.686921+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:35.687104+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:36.687235+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:37.687384+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:38.687503+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:39.687620+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:40.687782+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:41.687959+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:42.688088+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:43.688206+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:44.688348+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:45.688482+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:46.688609+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:47.688804+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:48.689006+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:49.689150+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:50.689309+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:51.689461+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:52.689600+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:53.689806+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:54.690045+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:55.690231+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:56.690371+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:57.690566+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:58.690712+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:59.690859+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:00.691072+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:01.691259+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:02.691419+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:03.691567+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:04.691763+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:05.691950+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:06.692115+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:07.692268+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:08.692400+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:09.692528+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:10.692675+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:11.692798+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:12.692967+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:13.693102+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:14.693219+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:15.693398+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:16.693535+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:17.693698+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:18.693822+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:19.693934+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:20.694101+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:21.694242+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:22.694373+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:23.694602+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:24.694792+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:25.694931+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:26.695082+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93396992 unmapped: 13017088 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:27.695302+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:28.695463+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:29.695651+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:30.695852+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:31.695963+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:32.696103+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:33.696302+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:34.696426+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:35.696646+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:36.696830+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:37.696998+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:38.697136+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:39.697293+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:40.697453+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:41.697570+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:42.697692+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:43.697864+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:44.697995+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:45.698159+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:46.698285+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:47.698420+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93405184 unmapped: 13008896 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:48.698543+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93413376 unmapped: 13000704 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:49.698672+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93413376 unmapped: 13000704 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:50.698846+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93413376 unmapped: 13000704 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:51.699041+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93413376 unmapped: 13000704 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:52.699171+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93413376 unmapped: 13000704 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:53.699309+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93413376 unmapped: 13000704 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:54.699452+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93413376 unmapped: 13000704 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:55.699595+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93413376 unmapped: 13000704 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:56.699729+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93413376 unmapped: 13000704 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:57.699856+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:58.700068+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:59.700211+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:00.700412+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:01.700563+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:02.700728+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:03.700937+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:04.701090+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:05.701282+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:06.701439+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:07.701591+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:08.701724+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:09.701958+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:10.702179+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:11.702355+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:12.702564+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:13.702702+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:14.702842+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:15.702987+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:16.703138+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:17.703326+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:18.703441+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 12992512 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:19.703699+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:20.703947+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:21.704081+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:22.704250+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:23.704363+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:24.704531+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:25.704729+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:26.704927+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:27.705068+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:28.705196+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:29.705354+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:30.705561+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:31.705715+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:32.705971+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:33.706154+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:34.706319+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:35.706462+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:36.706606+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:37.706781+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:38.706951+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:39.707081+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:40.707287+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:41.707424+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:42.707550+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:43.707677+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:44.707843+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:45.707994+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:46.708115+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:47.708318+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:48.708542+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:49.708753+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:50.708985+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:51.709129+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:52.709316+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:53.709447+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:54.709571+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:55.709707+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:56.709862+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:57.710019+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:58.710210+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:59.710359+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:00.710572+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:01.710712+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:02.710879+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:03.711063+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:04.711248+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 12984320 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:05.711459+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:06.711683+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:07.711882+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:08.712121+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:09.712275+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:10.712547+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:11.714036+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:12.714154+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:13.714330+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:14.714498+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:15.714803+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:16.714967+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:17.715223+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:18.715391+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:19.715662+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:20.715959+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:21.716143+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:22.716287+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:23.716437+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:24.716669+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:25.716854+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:26.717018+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:27.717226+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:28.717418+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:29.717718+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:30.717963+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:31.718209+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:32.718411+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:33.718641+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:34.718981+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 12976128 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:35.719268+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:36.719571+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:37.719810+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:38.720011+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:39.720260+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:40.720624+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:41.720990+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:42.721252+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:43.721404+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:44.721585+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:45.721744+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 9360 writes, 34K keys, 9360 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9360 writes, 2147 syncs, 4.36 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1877 writes, 5246 keys, 1877 commit groups, 1.0 writes per commit group, ingest: 4.05 MB, 0.01 MB/s
                                           Interval WAL: 1878 writes, 587 syncs, 3.20 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:46.721926+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:47.722125+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:48.722401+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:49.722598+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:50.722831+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:51.722998+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:52.723193+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:53.723363+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:54.723569+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:55.723792+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:56.723963+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:57.724277+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:58.724409+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:59.724649+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:00.725005+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:01.725171+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 12967936 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:02.725422+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:03.725679+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:04.725969+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:05.726122+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:06.726325+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:07.726534+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:08.726696+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:09.726938+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:10.727179+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:11.727503+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:12.727703+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:13.727959+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:14.728081+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:15.728314+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:16.728512+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:17.728810+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:18.729005+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:19.729202+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:20.729517+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:21.729678+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:22.729839+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:23.730003+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:24.730182+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:25.730399+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:26.730568+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:27.730718+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:28.730882+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:29.731104+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:30.731334+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:31.731536+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:32.731674+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93454336 unmapped: 12959744 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:33.731873+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93462528 unmapped: 12951552 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:34.732135+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93462528 unmapped: 12951552 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:35.732385+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93462528 unmapped: 12951552 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:36.732625+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93462528 unmapped: 12951552 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:37.732816+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93462528 unmapped: 12951552 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:38.733012+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93462528 unmapped: 12951552 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 297.788513184s of 297.809570312s, submitted: 157
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:39.733125+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93462528 unmapped: 12951552 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:40.733356+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93478912 unmapped: 12935168 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:41.733490+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93478912 unmapped: 12935168 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:42.733625+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:43.733793+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:44.733997+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:45.734109+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:46.734287+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:47.734500+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:48.734652+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:49.734799+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:50.734981+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:51.735142+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:52.735332+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:53.735447+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:54.735553+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:55.735692+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:56.735788+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:57.735933+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:58.736030+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:59.736177+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:00.736303+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:01.736448+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:02.736623+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:03.736760+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:04.736845+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:05.736979+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:06.737114+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:07.737296+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:08.737425+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:09.737573+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:10.737739+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:11.737915+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:12.738044+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:13.738159+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:14.738292+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:15.738390+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:16.738529+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:17.738641+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:18.738770+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:19.738957+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:20.739116+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:21.739226+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:22.739359+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:23.739579+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:24.739704+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:25.739837+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:26.739976+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:27.740121+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:28.740239+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:29.740386+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:30.740546+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:31.740716+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:32.740976+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:33.741165+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:34.741314+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:35.741448+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:36.742137+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:37.742422+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:38.742572+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:39.742776+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:40.742953+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:41.743117+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:42.743267+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:43.743402+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:44.743526+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:45.743608+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:46.743775+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:47.744190+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:48.744305+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:49.744399+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:50.744554+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:51.744928+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:52.745258+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:53.745537+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:54.746049+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:55.746643+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:56.747169+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:57.747598+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:58.747818+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:59.747942+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:00.748105+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:01.748278+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:02.748470+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:03.748632+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:04.748782+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:05.748974+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:06.749170+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:07.749326+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:08.749514+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:09.749693+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:10.749846+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:11.749988+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:12.750150+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:13.750274+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:14.750466+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:15.750596+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:16.750821+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:17.751025+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:18.751207+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:19.751486+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:20.751715+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:21.751943+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:22.752088+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:23.752207+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:24.752332+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:25.752484+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:26.752652+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:27.752838+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:28.753096+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:29.753428+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:30.753702+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:31.753975+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:32.754170+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:33.754461+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:34.754689+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:35.754913+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:36.755053+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:37.755244+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:38.755519+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:39.755773+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:40.755970+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:41.756210+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:42.756432+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:43.756624+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:44.756816+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:45.757059+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:46.757257+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:47.757525+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:48.757709+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:49.757843+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:50.758043+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:51.758250+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:52.758414+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:53.758610+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:54.758789+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:55.758983+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:56.759190+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:57.759375+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:58.759572+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:59.759754+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:00.760001+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:01.760186+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:02.760298+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:03.760427+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:04.760587+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:05.760718+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:06.760874+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:07.761008+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:08.761218+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:09.761438+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:10.761663+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:11.761828+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93487104 unmapped: 12926976 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:12.762026+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:13.762197+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:14.762391+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:15.762577+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:16.762743+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:17.762996+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:18.763137+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:19.763287+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:20.763432+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:21.763573+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:22.763775+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:23.763929+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:24.764096+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:25.764335+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:26.764503+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:27.764674+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:28.764825+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:29.764946+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:30.765120+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:31.765240+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:32.765374+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:33.765571+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:34.765781+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:35.765968+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:36.766171+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:37.766358+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:38.766473+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:39.766604+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:40.766811+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93495296 unmapped: 12918784 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa866000/0x0/0x4ffc00000, data 0xd10e75/0xe08000, compress 0x0/0x0/0x0, omap 0x637, meta 0x458f9c9), peers [0,1] op hist [])
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:41.766972+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 12910592 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:42.767137+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 12910592 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'config diff' '{prefix=config diff}'
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'config show' '{prefix=config show}'
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'counter dump' '{prefix=counter dump}'
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'counter schema' '{prefix=counter schema}'
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:43.767286+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93839360 unmapped: 12574720 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: tick
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_tickets
Oct 01 17:19:15 compute-0 ceph-osd[90269]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:44.767441+0000)
Oct 01 17:19:15 compute-0 ceph-osd[90269]: prioritycache tune_memory target: 4294967296 mapped: 93863936 unmapped: 12550144 heap: 106414080 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:15 compute-0 ceph-osd[90269]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:15 compute-0 ceph-osd[90269]: bluestore.MempoolThread(0x56260edf9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135246 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:15 compute-0 ceph-osd[90269]: do_command 'log dump' '{prefix=log dump}'
Oct 01 17:19:16 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14875 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 01 17:19:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/879731096' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 01 17:19:16 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14879 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:16 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 01 17:19:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3389051656' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 01 17:19:16 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14883 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:19:16 compute-0 ceph-mon[74273]: from='client.14873 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:16 compute-0 ceph-mon[74273]: from='client.14875 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:16 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/879731096' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 01 17:19:16 compute-0 ceph-mon[74273]: from='client.14879 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:16 compute-0 ceph-mon[74273]: pgmap v1516: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:16 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3389051656' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 01 17:19:16 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 01 17:19:16 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2007516144' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 01 17:19:17 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14887 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:19:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 01 17:19:17 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2973207352' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 01 17:19:17 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:19:17.803340) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759339157803370, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 786, "num_deletes": 255, "total_data_size": 943095, "memory_usage": 958424, "flush_reason": "Manual Compaction"}
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759339157809874, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 934175, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33396, "largest_seqno": 34181, "table_properties": {"data_size": 930107, "index_size": 1784, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9425, "raw_average_key_size": 19, "raw_value_size": 921727, "raw_average_value_size": 1904, "num_data_blocks": 79, "num_entries": 484, "num_filter_entries": 484, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759339098, "oldest_key_time": 1759339098, "file_creation_time": 1759339157, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 6591 microseconds, and 2949 cpu microseconds.
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:19:17.809927) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 934175 bytes OK
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:19:17.809945) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:19:17.811381) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:19:17.811398) EVENT_LOG_v1 {"time_micros": 1759339157811392, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:19:17.811417) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 939018, prev total WAL file size 939018, number of live WAL files 2.
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:19:17.811871) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303033' seq:72057594037927935, type:22 .. '6C6F676D0031323534' seq:0, type:0; will stop at (end)
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(912KB)], [71(8668KB)]
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759339157811921, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 9810880, "oldest_snapshot_seqno": -1}
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6258 keys, 9537885 bytes, temperature: kUnknown
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759339157868357, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9537885, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9496206, "index_size": 24929, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 159026, "raw_average_key_size": 25, "raw_value_size": 9384164, "raw_average_value_size": 1499, "num_data_blocks": 1010, "num_entries": 6258, "num_filter_entries": 6258, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336399, "oldest_key_time": 0, "file_creation_time": 1759339157, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3703b1af-85cb-46a0-a42e-c54c049b0356", "db_session_id": "Q91HFJNCEI5G0QGGY20B", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:19:17.868589) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9537885 bytes
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:19:17.870870) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.6 rd, 168.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.5 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(20.7) write-amplify(10.2) OK, records in: 6780, records dropped: 522 output_compression: NoCompression
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:19:17.870885) EVENT_LOG_v1 {"time_micros": 1759339157870877, "job": 40, "event": "compaction_finished", "compaction_time_micros": 56522, "compaction_time_cpu_micros": 19980, "output_level": 6, "num_output_files": 1, "total_output_size": 9537885, "num_input_records": 6780, "num_output_records": 6258, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759339157871122, "job": 40, "event": "table_file_deletion", "file_number": 73}
Oct 01 17:19:17 compute-0 ceph-mon[74273]: from='client.14883 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:19:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2007516144' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759339157872409, "job": 40, "event": "table_file_deletion", "file_number": 71}
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:19:17.811800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:19:17.872478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:19:17.872481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:19:17.872483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:19:17.872484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:19:17 compute-0 ceph-mon[74273]: rocksdb: (Original Log Time 2025/10/01-17:19:17.872486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 17:19:17 compute-0 ceph-mon[74273]: from='client.14887 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:19:17 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2973207352' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 01 17:19:17 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14895 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:19:17 compute-0 ceph-mgr[74571]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 01 17:19:17 compute-0 ceph-f44264e3-e26a-5bd3-9e84-b4ba651d9cf5-mgr-compute-0-pmbdpj[74567]: 2025-10-01T17:19:17.968+0000 7f816b913640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 01 17:19:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 01 17:19:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1480760209' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 01 17:19:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 01 17:19:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2136420747' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 01 17:19:18 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 01 17:19:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2749453090' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 01 17:19:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 01 17:19:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/377061324' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 01 17:19:18 compute-0 crontab[297154]: (root) LIST (root)
Oct 01 17:19:18 compute-0 ceph-mon[74273]: from='client.14895 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:19:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1480760209' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 01 17:19:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2136420747' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 01 17:19:18 compute-0 ceph-mon[74273]: pgmap v1517: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2749453090' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 01 17:19:18 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/377061324' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 01 17:19:18 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 01 17:19:18 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2725609756' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 01 17:19:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 01 17:19:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2083008590' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 01 17:19:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 01 17:19:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2652219663' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 01 17:19:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 01 17:19:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2693560672' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 01 17:19:19 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 01 17:19:19 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3494046533' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 01 17:19:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2725609756' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 01 17:19:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2083008590' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 01 17:19:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2652219663' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 01 17:19:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2693560672' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 01 17:19:19 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3494046533' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 01 17:19:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:19:19.990 162304 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:19:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:19:19.990 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:19:19 compute-0 ovn_metadata_agent[162258]: 2025-10-01 17:19:19.991 162304 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:19:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 01 17:19:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2108697836' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 483328 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:41.681720+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:42.681980+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:43.682146+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:44.682301+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:45.682467+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:46.682642+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:47.682784+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:48.682878+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:49.683050+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:50.683185+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 475136 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:51.683369+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 466944 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:52.683481+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 466944 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:53.683606+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 458752 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:54.683747+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 458752 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:55.683979+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 458752 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:56.684125+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 450560 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:57.684241+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 450560 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:58.684358+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 442368 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:59.684479+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 442368 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:00.684596+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 434176 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:01.684716+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 434176 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:02.684855+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 425984 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:03.684942+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 425984 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:04.685105+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 417792 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:05.685329+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 417792 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:06.685471+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 417792 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:07.685660+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 409600 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:08.685829+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 409600 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:09.685978+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:10.686105+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 401408 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:11.686246+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 393216 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:12.686482+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 385024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:13.686681+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 385024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:14.686861+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 376832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:15.687374+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 376832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:16.687539+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 368640 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:17.687698+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 368640 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:18.688111+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 360448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:19.688256+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 360448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:20.688457+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 352256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:21.688833+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 352256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:22.688968+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 344064 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:23.689103+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 344064 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:24.689259+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 335872 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:25.689448+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 335872 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:26.689616+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 335872 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:27.689767+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 327680 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:28.689905+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 327680 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:29.690067+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 319488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:30.690237+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 319488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:31.690388+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 319488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:32.690572+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 319488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:33.690797+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 319488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:34.691001+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 311296 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:35.691231+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 311296 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:36.691369+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 303104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:37.691565+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 303104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:38.691707+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 303104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:39.691842+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 294912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:40.691948+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 294912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:41.692076+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:42.692196+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:43.692344+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:44.692506+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:45.692657+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:46.692793+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:47.692999+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:48.693129+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:49.693245+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:50.693374+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:51.693524+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:52.693659+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:53.693795+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:54.693920+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:55.694072+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:56.694184+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:57.694295+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 278528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:58.694425+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:59.694563+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:00.694747+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:01.694875+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:02.694997+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:03.695126+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:04.695259+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:05.695402+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:06.695527+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:07.695653+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:08.695785+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:09.696003+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:10.696126+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:11.696241+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:12.696363+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:13.696495+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:14.696611+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:15.696766+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:16.696991+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:17.697144+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:18.697268+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:19.697490+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:20.697662+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:21.697811+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:22.697996+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:23.698155+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:24.698320+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:25.698511+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:26.698659+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:27.698803+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:28.698936+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:29.699059+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:30.699193+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:31.699332+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:32.699629+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:33.699779+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:34.699984+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:35.700227+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:36.700472+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:37.700675+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:38.700838+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:39.701026+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:40.701202+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:41.701354+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 245760 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:42.701560+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:43.701712+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:44.701846+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:45.702078+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:46.702261+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:47.702442+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:48.702559+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:49.702773+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:50.702950+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:51.703124+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:52.703295+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:53.703478+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:54.703644+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:55.703856+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:56.704294+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:57.704752+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:58.704933+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:59.705654+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:00.705815+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:01.706014+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:02.706218+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:03.706396+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:04.706581+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:05.706788+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:06.707552+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:07.707722+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:08.707925+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:09.708148+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:10.708304+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:11.708454+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:12.708601+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:13.708735+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:14.709008+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:15.709180+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:16.709304+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:17.709504+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:18.709678+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:19.709804+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:20.709965+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:21.710114+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:22.710259+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:23.710408+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:24.710585+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:25.710731+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:26.710865+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:27.711012+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:28.711199+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:29.711345+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:30.711510+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:31.711647+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:32.711816+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:33.711969+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:34.712089+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:35.712312+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:36.712465+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:37.712607+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:38.712760+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:39.712915+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:40.713064+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:41.713227+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:42.713408+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:43.713569+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:44.713725+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:45.713920+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:46.714092+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:47.714228+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:48.714392+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:49.714533+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:50.714728+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:51.714932+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:52.715090+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:53.715230+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:54.715390+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:55.715550+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:56.715705+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:57.715867+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:58.716058+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:59.716229+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:00.716470+0000)
Oct 01 17:19:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 01 17:19:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3621541757' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:01.717286+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:02.717469+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:03.717681+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:04.717853+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:05.718115+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:06.718244+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:07.718486+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:08.718812+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:09.719085+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:10.719223+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:11.719374+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:12.719543+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:13.719686+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:14.720106+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:15.720314+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:16.720444+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:17.720563+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:18.720678+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:19.720816+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:20.721136+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:21.721275+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:22.721442+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:23.721631+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:24.721862+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:25.722086+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:26.722230+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:27.722378+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:28.722587+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:29.722726+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:30.722876+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:31.723039+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:32.723226+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:33.723397+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:34.723547+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:35.723733+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:36.723929+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:37.724084+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:38.724265+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:39.724431+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:40.724561+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:41.724714+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:42.724921+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:43.725046+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:44.725176+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:45.725354+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:46.725471+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:47.725625+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 ms_handle_reset con 0x5624c62e0400 session 0x5624c70741e0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c62e1800
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 ms_handle_reset con 0x5624c62e1c00 session 0x5624c6887a40
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c62e0400
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:48.725745+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:49.725916+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:50.726084+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:51.726240+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:52.726381+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:53.726566+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:54.726765+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:55.726959+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:56.727086+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:57.727252+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:58.727422+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 147456 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:59.727567+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 147456 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:00.727725+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 147456 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:01.727921+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 147456 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:02.728054+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 147456 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:03.728189+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:04.728335+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:05.728490+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:06.728621+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:07.728835+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:08.728995+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:09.729175+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:10.729322+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:11.729439+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:12.729616+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:13.729748+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:14.729955+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:15.730172+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:16.730317+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:17.730467+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:18.730617+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:19.730757+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:20.730925+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:21.731083+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:22.731234+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:23.731391+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:24.731536+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:25.731741+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:26.731913+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:27.732060+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:28.732185+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:29.732341+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:30.732464+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:31.732629+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:32.732803+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:33.733016+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:34.733145+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:35.733283+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:36.733627+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:37.733769+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:38.733905+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:39.734029+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:40.734192+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:41.734418+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:42.734575+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:43.734746+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:44.734941+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:45.735117+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:46.735291+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:47.735466+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:48.735630+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:49.735795+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:50.735975+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:51.736133+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:52.736263+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:53.736391+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:54.736585+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:55.736807+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:56.736963+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:57.737122+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:58.737282+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:59.737450+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:00.737617+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:01.737799+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:02.737962+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:03.738117+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:04.738280+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:05.738472+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:06.738702+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:07.738881+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:08.739083+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:09.740058+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:10.740289+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:11.740466+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:12.740703+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:13.741074+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:14.741243+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:15.741409+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:16.741569+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:17.741747+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:18.741871+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:19.742047+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:20.742206+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:21.742388+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:22.742599+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:23.742795+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:24.743010+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:25.743252+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:26.743431+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:27.743623+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 114688 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:28.743793+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:29.743960+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:30.744073+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:31.744219+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:32.744376+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:33.744627+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:34.744804+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:35.745004+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:36.745997+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:37.746115+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:38.746258+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:39.746387+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:40.746512+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:41.746732+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:42.746974+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:43.747106+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:44.747368+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:45.747587+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:46.747853+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:47.748141+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:48.748420+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:49.748595+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:50.748839+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:51.749113+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:52.749313+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:53.749522+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:54.749721+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:55.749918+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:56.750115+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:57.750364+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:58.750554+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:59.750652+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:00.750751+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:01.750952+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:02.751152+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:03.751315+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:04.751479+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:05.751663+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:06.751795+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:07.751936+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:08.752092+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:09.752240+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:10.752399+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:11.752578+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:12.752759+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:13.752942+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:14.753111+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:15.753288+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:16.753454+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:17.753615+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:18.753818+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:19.753999+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:20.754156+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:21.754314+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:22.754502+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:23.754681+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:24.754818+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:25.755042+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:26.755227+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:27.755425+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:28.755592+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:29.755762+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:30.755901+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:31.756157+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:32.756641+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:33.756776+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:34.756999+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:35.757221+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:36.757387+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:37.757549+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:38.757732+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:39.757889+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:40.758068+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:41.758227+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:42.758462+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:43.758707+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:44.758964+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:45.759196+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:46.759365+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:47.759562+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:48.759689+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:49.759854+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:50.760062+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:51.760344+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:52.760568+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:53.760729+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:54.760933+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:55.761294+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:56.761467+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:57.761697+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:58.761938+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:59.762232+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:00.762472+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:01.762639+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:02.762774+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:03.762961+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:04.763151+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:05.763489+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:06.763708+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:07.763878+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:08.764110+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:09.764236+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:10.764492+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:11.764625+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:12.764864+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:13.765064+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:14.765214+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:15.765504+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:16.765712+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:17.766025+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:18.766240+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:19.766415+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:20.766560+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:21.766831+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:22.767020+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:23.767289+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:24.767463+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:25.768011+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:26.768423+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:27.768582+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:28.768791+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:29.768933+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:30.769109+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:31.769247+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:32.769392+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:33.769559+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:34.769782+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:35.770025+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:36.770295+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:37.770550+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:38.770738+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:39.770983+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:40.771172+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:41.771361+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:42.771592+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 40960 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:43.771984+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 40960 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:44.772239+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 40960 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:45.772535+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 40960 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:46.772796+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 40960 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:47.773074+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:48.773320+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:49.775265+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:50.775597+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:51.776028+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:52.776260+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:53.776527+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:54.776775+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:55.776994+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:56.777215+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:57.777644+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:58.778149+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:59.778461+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:00.778833+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:01.778995+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:02.779333+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:03.779625+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:04.780004+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:05.780245+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:06.780453+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:07.780651+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:08.780784+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:09.780929+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:10.781172+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:11.781279+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:12.781470+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:13.781648+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:14.781996+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:15.782252+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:16.782402+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:17.782560+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:18.782747+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:19.782966+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:20.783158+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:21.783350+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:22.783492+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:23.783657+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:24.783844+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:25.784092+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:26.784277+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:27.784449+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:28.784614+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:29.784771+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:30.785013+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:31.785175+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:32.785326+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 16384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:33.785468+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 16384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:34.785668+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 16384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:35.786006+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 16384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:36.786149+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 16384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:37.786310+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:38.786513+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 16384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:39.786654+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 16384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:40.786800+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 16384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 6849 writes, 28K keys, 6849 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6849 writes, 1288 syncs, 5.32 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 277 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f21090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f21090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f21090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5624c4f211f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:41.787021+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:42.787164+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:43.787310+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:44.787487+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:45.787679+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:46.787844+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:47.788017+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:48.788175+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:49.788354+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:50.788465+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:51.788615+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:52.788791+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:53.788994+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:54.789303+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:55.789523+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:56.789689+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:57.789935+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:58.790196+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:59.790359+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:00.800086+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:01.800282+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:02.800488+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:03.800682+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:04.800875+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:05.801169+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:06.801443+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:07.801636+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:08.801871+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:09.802215+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:10.802470+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:11.802658+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:12.802961+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:13.803132+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:14.803298+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:15.803492+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:16.803627+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:17.803808+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:18.804039+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:19.804190+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:20.804359+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:21.804600+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:22.804748+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:23.804942+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:24.805063+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:25.805205+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:26.805341+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:27.805484+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:28.805614+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:29.805752+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:30.805880+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:31.806179+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:32.806304+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:33.806406+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:34.806573+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:35.806781+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:36.806974+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:37.807114+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:38.807273+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 600.429077148s of 601.145690918s, submitted: 90
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:39.807415+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 1171456 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:40.807536+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 1163264 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:41.807671+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:42.807782+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:43.807928+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:44.808085+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:45.808262+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:46.808404+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:47.808526+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:48.808659+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:49.808822+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:50.808981+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:51.809149+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:52.810165+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:53.810315+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:54.810492+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:55.810725+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:56.811213+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:57.811375+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:58.811491+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:59.811617+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:00.811722+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:01.811934+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:02.812057+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:03.812212+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:04.812387+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:05.812603+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:06.812775+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:07.812929+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:08.813059+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:09.813196+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:10.813392+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:11.813562+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:12.813710+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:13.813829+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:14.814044+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:15.814235+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:16.814360+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:17.814672+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:18.814801+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:19.815016+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:20.815147+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:21.815285+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:22.815461+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:23.815599+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:24.815785+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:25.816008+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:26.816162+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:27.816318+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:28.816514+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:29.816760+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:30.816871+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:31.817052+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:32.817203+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:33.817346+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:34.817514+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:35.817677+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:36.817813+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:37.817943+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:38.818091+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:39.818237+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:40.818392+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 2203648 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:41.818559+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:42.818704+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:43.818927+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:44.819089+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:45.819326+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:46.819493+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:47.819688+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:48.819872+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:49.820080+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:50.820238+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:51.820389+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:52.820550+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:53.820752+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:54.821029+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:55.821233+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:56.821380+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:57.821524+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:58.821663+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:59.821838+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:00.821970+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:01.822124+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:02.822306+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:03.822552+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:04.822781+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:05.822988+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:06.823996+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:07.824211+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:08.824481+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:09.824786+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:10.824933+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:11.825138+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:12.825299+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:13.825533+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:14.825670+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:15.825861+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:16.826048+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:17.826224+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:18.826381+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:19.826565+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:20.826721+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:21.826936+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:22.827068+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:23.827188+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:24.827342+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:25.827545+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:26.827736+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:27.827962+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:28.828148+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:29.828385+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:30.828637+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:31.828803+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:32.829029+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:33.829218+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:34.829403+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:35.829598+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:36.829826+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:37.830039+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:38.830236+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:39.830379+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:40.830523+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 2195456 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:41.830732+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:42.830986+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:43.831190+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:44.831365+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:45.831544+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:46.831745+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:47.831988+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:48.832221+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:49.832389+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:50.832553+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:51.832744+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:52.832974+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:53.833160+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:54.833343+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:55.833592+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:56.833852+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:57.834010+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:58.834172+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:59.834357+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:00.834516+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:01.834643+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:02.834812+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:03.834968+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:04.835094+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:05.835281+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:06.835457+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:07.835635+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:08.835822+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:09.835965+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:10.836094+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:11.836223+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:12.836380+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:13.836533+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:14.836716+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:15.836957+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:16.837176+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:17.837315+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:18.837469+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:19.837632+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:20.837803+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:21.837931+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:22.838065+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:23.838221+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:24.838401+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:25.838586+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:26.838722+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:27.838874+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:28.839092+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:29.839244+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:30.839370+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:31.839501+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:32.839691+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:33.839804+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:34.839988+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:35.840173+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:36.840309+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:37.840442+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:38.840609+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:39.840794+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:40.840989+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:41.841187+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:42.841402+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:43.841609+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:44.841813+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:45.842014+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:46.843087+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:47.843302+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:48.843525+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:49.843667+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:50.843816+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:51.843954+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:52.844093+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:53.844273+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:54.844430+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:55.844635+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:56.844809+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:57.844987+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:58.845143+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:59.845323+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:00.845437+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:01.845580+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:02.845726+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:03.845881+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:04.846103+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:05.846313+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:06.846485+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:07.846635+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:08.846933+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:09.847234+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:10.847474+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:11.847653+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:12.847801+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:13.848022+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 2187264 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:14.848171+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 2179072 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:15.848361+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 2179072 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:16.848503+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 2179072 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:17.848684+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 2179072 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:18.848851+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x129cd9/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 2179072 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:19.849007+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 2179072 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:20.849158+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853752 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 2179072 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:21.849301+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c76f0800
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 222.424591064s of 223.227157593s, submitted: 90
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 2154496 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:22.849446+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fca3b000/0x0/0x4ffc00000, data 0x12b85e/0x1e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:23.849607+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 18939904 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 125 ms_handle_reset con 0x5624c76f0800 session 0x5624c96925a0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:24.849742+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 17784832 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c89dec00
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:25.849978+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 17776640 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065174 data_alloc: 218103808 data_used: 253952
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:26.850137+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 17743872 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 126 ms_handle_reset con 0x5624c89dec00 session 0x5624c93f25a0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fadc3000/0x0/0x4ffc00000, data 0x1d9f016/0x1e5b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:27.850273+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 17735680 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:28.850411+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 17735680 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:29.850580+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 17735680 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:30.851586+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 17735680 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073946 data_alloc: 218103808 data_used: 266240
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:31.851699+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:32.851849+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fadbb000/0x0/0x4ffc00000, data 0x1da2635/0x1e62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:33.851981+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:34.852180+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:35.852402+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073946 data_alloc: 218103808 data_used: 266240
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:36.852562+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:37.852677+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:38.852834+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fadbb000/0x0/0x4ffc00000, data 0x1da2635/0x1e62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fadbb000/0x0/0x4ffc00000, data 0x1da2635/0x1e62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:39.852991+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:40.853149+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073946 data_alloc: 218103808 data_used: 266240
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:41.853278+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:42.853406+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:43.853562+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 17727488 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c89dfc00
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.713157654s of 21.930625916s, submitted: 48
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fadbb000/0x0/0x4ffc00000, data 0x1da2635/0x1e62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:44.853719+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 17645568 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:45.853883+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 16547840 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Got map version 10
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c906f800
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076438 data_alloc: 218103808 data_used: 266240
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:46.854281+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 16777216 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:47.854399+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 16777216 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:48.854548+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 16556032 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:49.854699+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 16515072 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fada1000/0x0/0x4ffc00000, data 0x1dbb67c/0x1e7d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:50.854884+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 16465920 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080228 data_alloc: 218103808 data_used: 266240
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:51.855165+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 16465920 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:52.855311+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 16433152 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:53.855443+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 16392192 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Got map version 11
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.376805305s of 10.613768578s, submitted: 47
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:54.855581+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 16359424 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:55.855789+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 15228928 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fad8c000/0x0/0x4ffc00000, data 0x1dd04d9/0x1e92000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082408 data_alloc: 218103808 data_used: 266240
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:56.855976+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15147008 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:57.856150+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15147008 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:58.856300+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fad87000/0x0/0x4ffc00000, data 0x1dd57f8/0x1e97000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 15114240 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:59.856427+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15073280 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:00.856553+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15048704 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fad80000/0x0/0x4ffc00000, data 0x1ddc97f/0x1e9e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082632 data_alloc: 218103808 data_used: 266240
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:01.856676+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15015936 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:02.856873+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15015936 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:03.857206+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15015936 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.274366379s of 10.007667542s, submitted: 35
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:04.857391+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 14942208 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fad70000/0x0/0x4ffc00000, data 0x1deae65/0x1eae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:05.857569+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 14884864 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088736 data_alloc: 218103808 data_used: 266240
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:06.857715+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 14786560 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:07.857943+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 14753792 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:08.858122+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 14614528 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:09.858297+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fad5e000/0x0/0x4ffc00000, data 0x1dfda54/0x1ec0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 14614528 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:10.858443+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 14475264 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096132 data_alloc: 218103808 data_used: 274432
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:11.858594+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 14401536 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:12.858779+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 14376960 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:13.858955+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 14303232 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:14.859101+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.836385727s of 10.053128242s, submitted: 94
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 14254080 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fad36000/0x0/0x4ffc00000, data 0x1e24f66/0x1ee8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:15.859253+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 14254080 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092610 data_alloc: 218103808 data_used: 274432
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:16.859383+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 14229504 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:17.859555+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 13131776 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:18.859701+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 12083200 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:19.859878+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9b85000/0x0/0x4ffc00000, data 0x1e3563a/0x1ef9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 12066816 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:20.860117+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 12042240 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9b82000/0x0/0x4ffc00000, data 0x1e3aaac/0x1efc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096620 data_alloc: 218103808 data_used: 282624
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:21.860296+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 12042240 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:22.860464+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 12017664 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9b7d000/0x0/0x4ffc00000, data 0x1e3deac/0x1f00000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:23.860651+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 12017664 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:24.860817+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 12017664 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.922076225s of 10.262989998s, submitted: 41
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:25.861032+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 11952128 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099828 data_alloc: 218103808 data_used: 282624
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:26.861182+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 11952128 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9b72000/0x0/0x4ffc00000, data 0x1e47d7b/0x1f0c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:27.861372+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 11952128 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:28.861519+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 11935744 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:29.861684+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 11935744 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:30.861951+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 11919360 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:31.862135+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097878 data_alloc: 218103808 data_used: 282624
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 11919360 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9b72000/0x0/0x4ffc00000, data 0x1e48e86/0x1f0c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:32.862297+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 11788288 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9b68000/0x0/0x4ffc00000, data 0x1e516b2/0x1f16000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:33.862463+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 11788288 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:34.862671+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 11788288 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.909614563s of 10.000017166s, submitted: 23
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:35.862834+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 11722752 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:36.862982+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105184 data_alloc: 218103808 data_used: 290816
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 11706368 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:37.863163+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 11706368 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:38.863344+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 11706368 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 130 heartbeat osd_stat(store_statfs(0x4f9b5c000/0x0/0x4ffc00000, data 0x1e5b885/0x1f21000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:39.863500+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 11706368 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 130 heartbeat osd_stat(store_statfs(0x4f9b57000/0x0/0x4ffc00000, data 0x1e616a2/0x1f27000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:40.863692+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 11689984 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:41.863827+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103212 data_alloc: 218103808 data_used: 290816
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 11689984 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 130 heartbeat osd_stat(store_statfs(0x4f9b56000/0x0/0x4ffc00000, data 0x1e62682/0x1f28000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:42.863999+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 11689984 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:43.864135+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 11665408 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:44.864272+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 11665408 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.819531441s of 10.000153542s, submitted: 46
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:45.864504+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 130 heartbeat osd_stat(store_statfs(0x4f9b4b000/0x0/0x4ffc00000, data 0x1e6cc8a/0x1f33000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 11640832 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:46.864675+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111098 data_alloc: 218103808 data_used: 299008
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 11501568 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:47.864786+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 11247616 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:48.864967+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 11214848 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:49.866996+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 11255808 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:50.867127+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 11255808 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 131 heartbeat osd_stat(store_statfs(0x4f9b27000/0x0/0x4ffc00000, data 0x1e90a50/0x1f57000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:51.867276+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111162 data_alloc: 218103808 data_used: 299008
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 11206656 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 131 heartbeat osd_stat(store_statfs(0x4f9b23000/0x0/0x4ffc00000, data 0x1e94fb8/0x1f5b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:52.867567+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 11206656 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:53.867750+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 11206656 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:54.867952+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.782431602s of 10.000225067s, submitted: 42
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 11149312 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:55.868134+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 11091968 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:56.868319+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116414 data_alloc: 218103808 data_used: 303104
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 10895360 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 131 heartbeat osd_stat(store_statfs(0x4f9b05000/0x0/0x4ffc00000, data 0x1eb2a69/0x1f79000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:57.868526+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 10895360 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:58.868700+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 10797056 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:59.868850+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 9682944 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:00.869034+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 9494528 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 131 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:01.869215+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121752 data_alloc: 218103808 data_used: 311296
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 9404416 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9adf000/0x0/0x4ffc00000, data 0x1ed6642/0x1f9e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:02.869381+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 9863168 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:03.869544+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 86679552 unmapped: 9740288 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:04.869683+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.387005329s of 10.018237114s, submitted: 70
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 86720512 unmapped: 9699328 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:05.869838+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9abc000/0x0/0x4ffc00000, data 0x1ef9b32/0x1fc2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 132 handle_osd_map epochs [133,133], i have 133, src has [1,133]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 9494528 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:06.870074+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132746 data_alloc: 218103808 data_used: 319488
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9aa3000/0x0/0x4ffc00000, data 0x1f0fc1a/0x1fda000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 9461760 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:07.870230+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 9396224 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Got map version 12
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:08.870372+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 9445376 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c906e000
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:09.870554+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 87105536 unmapped: 9314304 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:10.870735+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 87089152 unmapped: 9330688 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:11.870967+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142894 data_alloc: 218103808 data_used: 327680
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 87195648 unmapped: 9224192 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:12.871129+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f963e000/0x0/0x4ffc00000, data 0x1f62185/0x202f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 87474176 unmapped: 8945664 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:13.871283+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 88375296 unmapped: 8044544 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:14.871426+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.033059359s of 10.011897087s, submitted: 159
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 8036352 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:15.871627+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 8019968 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:16.871751+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148250 data_alloc: 218103808 data_used: 339968
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 8175616 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:17.871936+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 88276992 unmapped: 8142848 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f95ff000/0x0/0x4ffc00000, data 0x1fa032f/0x206f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:18.872047+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 88276992 unmapped: 8142848 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f95ff000/0x0/0x4ffc00000, data 0x1fa035e/0x206e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:19.872187+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 8028160 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:20.872335+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 6848512 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:21.872460+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154230 data_alloc: 218103808 data_used: 348160
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 6832128 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:22.872792+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 6955008 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:23.872929+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 6955008 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f95e4000/0x0/0x4ffc00000, data 0x1fba00f/0x208a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:24.873053+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 6930432 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:25.873215+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89628672 unmapped: 6791168 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.069796562s of 11.403741837s, submitted: 118
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:26.873280+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156420 data_alloc: 218103808 data_used: 360448
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89227264 unmapped: 7192576 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:27.873438+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89227264 unmapped: 7192576 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:28.873600+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89227264 unmapped: 7192576 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:29.873738+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f95c0000/0x0/0x4ffc00000, data 0x1fdb698/0x20ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89227264 unmapped: 7192576 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:30.873848+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89292800 unmapped: 7127040 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:31.873971+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160980 data_alloc: 218103808 data_used: 364544
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f95a6000/0x0/0x4ffc00000, data 0x1ff64a7/0x20c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 7020544 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:32.874104+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90750976 unmapped: 5668864 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f9586000/0x0/0x4ffc00000, data 0x2015d8e/0x20e8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:33.874217+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90750976 unmapped: 5668864 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:34.874450+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 5529600 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:35.874592+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 5996544 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:36.874701+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165594 data_alloc: 218103808 data_used: 364544
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.134466171s of 10.624387741s, submitted: 50
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90587136 unmapped: 5832704 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f9549000/0x0/0x4ffc00000, data 0x20539a6/0x2125000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:37.875004+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90365952 unmapped: 6053888 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:38.875115+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90456064 unmapped: 5963776 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:39.875273+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90701824 unmapped: 5718016 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:40.875408+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90578944 unmapped: 5840896 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9511000/0x0/0x4ffc00000, data 0x208b773/0x215c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:41.875572+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169704 data_alloc: 218103808 data_used: 368640
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90693632 unmapped: 5726208 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:42.875718+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90955776 unmapped: 5464064 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f94f0000/0x0/0x4ffc00000, data 0x20acef8/0x217d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:43.875832+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90824704 unmapped: 5595136 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:44.876009+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f94da000/0x0/0x4ffc00000, data 0x20c3552/0x2194000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 90873856 unmapped: 5545984 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:45.876176+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 4390912 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:46.876314+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178448 data_alloc: 218103808 data_used: 376832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.355510712s of 10.051280022s, submitted: 102
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 4349952 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:47.876491+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 4349952 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:48.876628+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 4349952 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:49.877084+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f94ad000/0x0/0x4ffc00000, data 0x20f0f7c/0x21c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,4])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 92135424 unmapped: 4284416 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:50.877204+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 4816896 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:51.877290+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180540 data_alloc: 218103808 data_used: 380928
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 4816896 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:52.877433+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f948b000/0x0/0x4ffc00000, data 0x2111fa4/0x21e3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 91947008 unmapped: 4472832 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:53.877588+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 91947008 unmapped: 4472832 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:54.877742+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 92299264 unmapped: 4120576 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:55.877961+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 92053504 unmapped: 4366336 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:56.878105+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186900 data_alloc: 218103808 data_used: 389120
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 92127232 unmapped: 4292608 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f943c000/0x0/0x4ffc00000, data 0x215ea3d/0x2231000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:57.878347+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 4210688 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:58.878721+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.927650452s of 12.131904602s, submitted: 70
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 92430336 unmapped: 3989504 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:59.879443+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 91717632 unmapped: 4702208 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:00.879986+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 91717632 unmapped: 4702208 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:01.880147+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191678 data_alloc: 218103808 data_used: 397312
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 4505600 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9407000/0x0/0x4ffc00000, data 0x2191fd3/0x2266000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:02.880345+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 3284992 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:03.880531+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 3301376 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:04.880710+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93257728 unmapped: 3162112 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:05.881090+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f93df000/0x0/0x4ffc00000, data 0x21bbc75/0x228f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93257728 unmapped: 3162112 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f93df000/0x0/0x4ffc00000, data 0x21bbc75/0x228f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:06.881290+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192768 data_alloc: 218103808 data_used: 397312
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93257728 unmapped: 3162112 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:07.881505+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93413376 unmapped: 3006464 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:08.881721+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93413376 unmapped: 3006464 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:09.881980+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.396786690s of 10.760301590s, submitted: 51
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 2973696 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:10.882219+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f93be000/0x0/0x4ffc00000, data 0x21dd92a/0x22b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 2867200 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:11.882460+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197862 data_alloc: 218103808 data_used: 397312
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 2867200 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:12.882639+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93585408 unmapped: 2834432 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Got map version 13
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:13.882817+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 2867200 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:14.883015+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 2867200 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:15.883237+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 2867200 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:16.883427+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f939d000/0x0/0x4ffc00000, data 0x21feab8/0x22d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199996 data_alloc: 218103808 data_used: 397312
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93691904 unmapped: 2727936 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:17.883616+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93691904 unmapped: 2727936 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f939d000/0x0/0x4ffc00000, data 0x21feb82/0x22d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:18.883794+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93691904 unmapped: 2727936 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:19.883989+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.056099892s of 10.000339508s, submitted: 22
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93782016 unmapped: 2637824 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:20.884172+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93872128 unmapped: 2547712 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:21.884294+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208408 data_alloc: 218103808 data_used: 397312
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9364000/0x0/0x4ffc00000, data 0x2236003/0x230a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93880320 unmapped: 2539520 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:22.884473+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93683712 unmapped: 2736128 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:23.884627+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93683712 unmapped: 2736128 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:24.884755+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93798400 unmapped: 2621440 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:25.884985+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f933d000/0x0/0x4ffc00000, data 0x225b5b0/0x2330000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 93863936 unmapped: 2555904 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:26.885155+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206676 data_alloc: 218103808 data_used: 397312
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 2162688 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:27.885399+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 2162688 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:28.885547+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 94388224 unmapped: 2031616 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:29.885719+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.778255463s of 10.000174522s, submitted: 48
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 2007040 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:30.885869+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9305000/0x0/0x4ffc00000, data 0x2293118/0x2369000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 94568448 unmapped: 1851392 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:31.886009+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210244 data_alloc: 218103808 data_used: 397312
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 94806016 unmapped: 1613824 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:32.886124+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 94806016 unmapped: 1613824 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:33.886262+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 95059968 unmapped: 1359872 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:34.886422+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f92d7000/0x0/0x4ffc00000, data 0x22c2b58/0x2397000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2236416 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f929a000/0x0/0x4ffc00000, data 0x22ff727/0x23d4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:35.886598+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 95240192 unmapped: 2228224 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:36.886729+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218020 data_alloc: 218103808 data_used: 397312
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 95240192 unmapped: 2228224 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:37.886875+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 95846400 unmapped: 1622016 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:38.887060+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 95895552 unmapped: 1572864 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:39.887217+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.700897217s of 10.000336647s, submitted: 69
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 1531904 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:40.887404+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9263000/0x0/0x4ffc00000, data 0x2337974/0x240b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 ms_handle_reset con 0x5624c906e000 session 0x5624c9698780
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 770048 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:41.887550+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1220104 data_alloc: 218103808 data_used: 397312
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 96706560 unmapped: 761856 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:42.887695+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Got map version 14
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 96845824 unmapped: 1671168 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:43.887990+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9236000/0x0/0x4ffc00000, data 0x2364498/0x2438000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 97034240 unmapped: 1482752 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:44.888127+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 2531328 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f91fe000/0x0/0x4ffc00000, data 0x239c08a/0x2470000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:45.888290+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 2531328 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:46.888520+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231956 data_alloc: 218103808 data_used: 397312
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 96313344 unmapped: 2203648 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:47.888749+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 96313344 unmapped: 2203648 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:48.888954+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 96108544 unmapped: 2408448 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f91cb000/0x0/0x4ffc00000, data 0x23ce53a/0x24a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:49.889119+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.741598129s of 10.000253677s, submitted: 243
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 97427456 unmapped: 2138112 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:50.889250+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 97427456 unmapped: 2138112 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:51.889426+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f919b000/0x0/0x4ffc00000, data 0x23fd8a5/0x24d2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237066 data_alloc: 218103808 data_used: 397312
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 1982464 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:52.889604+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 98091008 unmapped: 1474560 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9176000/0x0/0x4ffc00000, data 0x2422384/0x24f6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:53.889771+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 98091008 unmapped: 1474560 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:54.889917+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 98107392 unmapped: 1458176 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:55.890108+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 97607680 unmapped: 1957888 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:56.890244+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238070 data_alloc: 218103808 data_used: 397312
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 97771520 unmapped: 1794048 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:57.890416+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 1777664 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:58.890586+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f912d000/0x0/0x4ffc00000, data 0x246cf23/0x2541000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 97976320 unmapped: 1589248 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:59.890717+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.649096489s of 10.000499725s, submitted: 77
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 483328 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:00.890935+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 483328 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:01.891135+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f90f4000/0x0/0x4ffc00000, data 0x24a58f7/0x257a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241438 data_alloc: 218103808 data_used: 397312
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 99328000 unmapped: 1286144 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:02.891242+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 99426304 unmapped: 1187840 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:03.891396+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 99426304 unmapped: 1187840 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:04.891553+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 99270656 unmapped: 2392064 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:05.891731+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9089000/0x0/0x4ffc00000, data 0x2510b2b/0x25e5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 99278848 unmapped: 2383872 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:06.891870+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248896 data_alloc: 218103808 data_used: 397312
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 99328000 unmapped: 2334720 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:07.892063+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 99614720 unmapped: 2048000 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:08.892218+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 99614720 unmapped: 2048000 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:09.892413+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.705707550s of 10.003671646s, submitted: 73
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 100671488 unmapped: 991232 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:10.892527+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f903b000/0x0/0x4ffc00000, data 0x255cf13/0x2632000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 100671488 unmapped: 991232 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:11.892608+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9024000/0x0/0x4ffc00000, data 0x257514a/0x264a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250104 data_alloc: 218103808 data_used: 397312
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 983040 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:12.892766+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 2007040 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:13.892987+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9024000/0x0/0x4ffc00000, data 0x257514a/0x264a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 2179072 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:14.893116+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 100597760 unmapped: 2113536 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:15.893288+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 100597760 unmapped: 2113536 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:16.893453+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257856 data_alloc: 218103808 data_used: 397312
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 1900544 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:17.893624+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 1900544 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f8f97000/0x0/0x4ffc00000, data 0x2601c98/0x26d6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:18.893729+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 1900544 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:19.893843+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.710945129s of 10.001224518s, submitted: 71
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 102039552 unmapped: 1720320 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:20.893978+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 102031360 unmapped: 1728512 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:21.894091+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f8f71000/0x0/0x4ffc00000, data 0x262894d/0x26fd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268920 data_alloc: 218103808 data_used: 397312
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 102031360 unmapped: 1728512 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:22.894216+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 101146624 unmapped: 2613248 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:23.894385+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 101146624 unmapped: 2613248 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:24.894506+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 101146624 unmapped: 2613248 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:25.894687+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 101466112 unmapped: 2293760 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:26.894871+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264818 data_alloc: 218103808 data_used: 397312
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 101564416 unmapped: 2195456 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:27.895213+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f8eef000/0x0/0x4ffc00000, data 0x26aac43/0x277f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 103047168 unmapped: 712704 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:28.895577+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 103120896 unmapped: 638976 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:29.895843+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.532333374s of 10.003606796s, submitted: 97
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f8e9b000/0x0/0x4ffc00000, data 0x26ffb29/0x27d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 843776 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:30.896072+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 1884160 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:31.896235+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278808 data_alloc: 218103808 data_used: 397312
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 103292928 unmapped: 1515520 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:32.896437+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f8e49000/0x0/0x4ffc00000, data 0x2751053/0x2825000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 103309312 unmapped: 1499136 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:33.896613+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 103309312 unmapped: 1499136 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:34.896772+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 104939520 unmapped: 917504 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:35.896972+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 1753088 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:36.897150+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293012 data_alloc: 218103808 data_used: 405504
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 1728512 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:37.897383+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f8ddc000/0x0/0x4ffc00000, data 0x27bbdb3/0x2891000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 2408448 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:38.897518+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 104554496 unmapped: 2351104 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:39.897738+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.513430595s of 10.029569626s, submitted: 121
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 104742912 unmapped: 2162688 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:40.897997+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 143 handle_osd_map epochs [145,145], i have 143, src has [1,145]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 143 handle_osd_map epochs [144,145], i have 143, src has [1,145]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 2031616 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:41.898195+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297696 data_alloc: 218103808 data_used: 413696
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 104996864 unmapped: 1908736 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:42.898407+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 737280 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:43.898574+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8d5d000/0x0/0x4ffc00000, data 0x2837930/0x2910000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 598016 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:44.898747+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 491520 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:45.898954+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 145 handle_osd_map epochs [146,146], i have 146, src has [1,146]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f8d04000/0x0/0x4ffc00000, data 0x288db98/0x2968000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,4])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 106479616 unmapped: 425984 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:46.899133+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305906 data_alloc: 218103808 data_used: 421888
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 106782720 unmapped: 122880 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:47.899302+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 106782720 unmapped: 122880 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:48.899471+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f8cea000/0x0/0x4ffc00000, data 0x28a9d25/0x2984000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 107175936 unmapped: 1826816 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:49.899618+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.949766874s of 10.132806778s, submitted: 104
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:50.899794+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 2351104 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:51.900026+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 2351104 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313234 data_alloc: 218103808 data_used: 421888
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:52.900219+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 2285568 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:53.900356+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 843776 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f8c75000/0x0/0x4ffc00000, data 0x2921057/0x29f9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:54.900511+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 835584 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:55.900696+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 1851392 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:56.901012+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 1540096 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f7a94000/0x0/0x4ffc00000, data 0x2960a9c/0x2a3a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x572f9c7), peers [0,2] op hist [0,0,0,1])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318994 data_alloc: 218103808 data_used: 421888
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:57.901165+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 1810432 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:58.901351+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 1802240 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:59.901660+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 1662976 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:00.901875+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 1662976 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.616237640s of 11.104003906s, submitted: 67
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:01.902108+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 1662976 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317914 data_alloc: 218103808 data_used: 421888
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:02.902311+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 1662976 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f7a6a000/0x0/0x4ffc00000, data 0x298b94d/0x2a64000, compress 0x0/0x0/0x0, omap 0x639, meta 0x572f9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:03.902526+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 1630208 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:04.902766+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 1630208 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f68ba000/0x0/0x4ffc00000, data 0x299aed9/0x2a74000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:05.902973+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 109387776 unmapped: 1712128 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:06.903201+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 2113536 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324516 data_alloc: 218103808 data_used: 421888
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:07.903369+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 2113536 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:08.903494+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108609536 unmapped: 2490368 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:09.903634+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108609536 unmapped: 2490368 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:10.903758+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108609536 unmapped: 2490368 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.896729469s of 10.008993149s, submitted: 23
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f6890000/0x0/0x4ffc00000, data 0x29c5ae8/0x2a9e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:11.903876+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108609536 unmapped: 2490368 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317916 data_alloc: 218103808 data_used: 421888
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:12.904112+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108617728 unmapped: 2482176 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:13.904245+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 2457600 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:14.904380+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 2457600 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:15.904561+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 2457600 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f688f000/0x0/0x4ffc00000, data 0x29c5b50/0x2a9e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:16.904760+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 2457600 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1319508 data_alloc: 218103808 data_used: 421888
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:17.904938+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 2449408 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:18.905109+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 2449408 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:19.905280+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 2449408 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:20.905548+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 2449408 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.780126572s of 10.010715485s, submitted: 9
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:21.905748+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 2449408 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f6890000/0x0/0x4ffc00000, data 0x29c5b4e/0x2a9e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318994 data_alloc: 218103808 data_used: 421888
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:22.905956+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 2449408 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:23.906142+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 2449408 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:24.906327+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 2449408 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:25.906606+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 2449408 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f688f000/0x0/0x4ffc00000, data 0x29c5c22/0x2a9f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:26.906750+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 2441216 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323974 data_alloc: 218103808 data_used: 421888
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:27.906993+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 2441216 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:28.907138+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 2441216 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:29.907275+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 2441216 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:30.907466+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 2441216 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:31.908253+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 2441216 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f688d000/0x0/0x4ffc00000, data 0x29c5d22/0x2aa1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323974 data_alloc: 218103808 data_used: 421888
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:32.908384+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 2441216 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f688d000/0x0/0x4ffc00000, data 0x29c5d22/0x2aa1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:33.908516+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 2441216 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:34.908712+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 2441216 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.401308060s of 13.708648682s, submitted: 8
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:35.908950+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 2433024 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:36.909136+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 2433024 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325566 data_alloc: 218103808 data_used: 421888
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:37.909269+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f688d000/0x0/0x4ffc00000, data 0x29c5d22/0x2aa1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 2433024 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:38.909458+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 2433024 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f688c000/0x0/0x4ffc00000, data 0x29c5e22/0x2aa2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:39.909601+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 2433024 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:40.909747+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 2433024 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2753 syncs, 3.79 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3588 writes, 13K keys, 3588 commit groups, 1.0 writes per commit group, ingest: 19.98 MB, 0.03 MB/s
                                           Interval WAL: 3588 writes, 1465 syncs, 2.45 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:41.910035+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108691456 unmapped: 2408448 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f688c000/0x0/0x4ffc00000, data 0x29c5e22/0x2aa2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325566 data_alloc: 218103808 data_used: 421888
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:42.910260+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108691456 unmapped: 2408448 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:43.910459+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108691456 unmapped: 2408448 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:44.910688+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108691456 unmapped: 2408448 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.408632278s of 10.221953392s, submitted: 7
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:45.911517+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108691456 unmapped: 2408448 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:46.911665+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 ms_handle_reset con 0x5624c6308c00 session 0x5624c5d04f00
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c906e000
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108691456 unmapped: 2408448 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc ms_handle_reset ms_handle_reset con 0x5624c6309c00
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3235544197
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: get_auth_request con 0x5624c682fc00 auth_method 0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_configure stats_period=5
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f688a000/0x0/0x4ffc00000, data 0x29c5f32/0x2aa4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:47.911796+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328398 data_alloc: 218103808 data_used: 421888
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108797952 unmapped: 2301952 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 ms_handle_reset con 0x5624c62e1800 session 0x5624c923b4a0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c76f0c00
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 ms_handle_reset con 0x5624c62e0400 session 0x5624c966a5a0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c62e1800
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f688a000/0x0/0x4ffc00000, data 0x29c5f32/0x2aa4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x68cf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:48.911930+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108806144 unmapped: 2293760 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:49.912074+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108814336 unmapped: 2285568 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:50.912221+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108814336 unmapped: 2285568 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:51.912427+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108814336 unmapped: 2285568 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f647b000/0x0/0x4ffc00000, data 0x29c5fc6/0x2aa3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:52.929069+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327884 data_alloc: 218103808 data_used: 421888
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108814336 unmapped: 2285568 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:53.929371+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108838912 unmapped: 2260992 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:54.929507+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108838912 unmapped: 2260992 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.027173042s of 10.099073410s, submitted: 15
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:55.929703+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108838912 unmapped: 2260992 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f647b000/0x0/0x4ffc00000, data 0x29c6015/0x2aa2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:56.929983+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108838912 unmapped: 2260992 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:57.930100+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328946 data_alloc: 218103808 data_used: 421888
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108838912 unmapped: 2260992 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:58.930216+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108847104 unmapped: 2252800 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:59.930403+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108847104 unmapped: 2252800 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f647a000/0x0/0x4ffc00000, data 0x29c620c/0x2aa3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:00.930738+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108847104 unmapped: 2252800 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f647a000/0x0/0x4ffc00000, data 0x29c620c/0x2aa3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:01.930882+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108847104 unmapped: 2252800 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:02.931182+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327920 data_alloc: 218103808 data_used: 421888
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108847104 unmapped: 2252800 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:03.931286+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108847104 unmapped: 2252800 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:04.931538+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108847104 unmapped: 2252800 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.809282780s of 10.259056091s, submitted: 15
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f647c000/0x0/0x4ffc00000, data 0x29c629e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:05.931842+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108871680 unmapped: 2228224 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:06.931953+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108871680 unmapped: 2228224 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:07.932133+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330894 data_alloc: 218103808 data_used: 421888
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108871680 unmapped: 2228224 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:08.932406+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108879872 unmapped: 2220032 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f6479000/0x0/0x4ffc00000, data 0x29c649b/0x2aa5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:09.932540+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108879872 unmapped: 2220032 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f6479000/0x0/0x4ffc00000, data 0x29c649b/0x2aa5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:10.932652+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108879872 unmapped: 2220032 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:11.932947+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108879872 unmapped: 2220032 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:12.933249+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335318 data_alloc: 218103808 data_used: 421888
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108879872 unmapped: 2220032 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:13.933394+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108879872 unmapped: 2220032 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f6477000/0x0/0x4ffc00000, data 0x29c65c8/0x2aa6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:14.933945+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f6477000/0x0/0x4ffc00000, data 0x29c65c8/0x2aa6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108879872 unmapped: 2220032 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.801453590s of 10.081443787s, submitted: 18
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:15.934158+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108879872 unmapped: 2220032 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:16.934325+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108879872 unmapped: 2220032 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:17.934437+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336524 data_alloc: 218103808 data_used: 421888
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108879872 unmapped: 2220032 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:18.934615+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 2203648 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:19.934744+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108904448 unmapped: 2195456 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f6475000/0x0/0x4ffc00000, data 0x29c66c4/0x2aa7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:20.934933+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108904448 unmapped: 2195456 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:21.935070+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108912640 unmapped: 2187264 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:22.935234+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335770 data_alloc: 218103808 data_used: 421888
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108912640 unmapped: 2187264 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f6479000/0x0/0x4ffc00000, data 0x29c65fe/0x2aa5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:23.935352+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108912640 unmapped: 2187264 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:24.935482+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108912640 unmapped: 2187264 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:25.935666+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108912640 unmapped: 3235840 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.637481689s of 10.894768715s, submitted: 29
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:26.935816+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 3252224 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:27.936003+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f6477000/0x0/0x4ffc00000, data 0x29c8283/0x2aa6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338020 data_alloc: 218103808 data_used: 430080
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 3252224 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:28.936133+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108904448 unmapped: 3244032 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:29.936246+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108904448 unmapped: 3244032 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:30.936392+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108929024 unmapped: 3219456 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:31.936577+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108929024 unmapped: 3219456 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:32.936711+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1337544 data_alloc: 218103808 data_used: 430080
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108929024 unmapped: 3219456 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f6478000/0x0/0x4ffc00000, data 0x29c84e7/0x2aa6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:33.936848+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108953600 unmapped: 3194880 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:34.937003+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108961792 unmapped: 3186688 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:35.937196+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 3178496 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:36.937344+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 3178496 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f647c000/0x0/0x4ffc00000, data 0x29c860e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.254523277s of 10.673042297s, submitted: 45
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 147 handle_osd_map epochs [148,148], i have 148, src has [1,148]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:37.937489+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341592 data_alloc: 218103808 data_used: 438272
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 108961792 unmapped: 3186688 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:38.937643+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 109035520 unmapped: 3112960 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:39.937832+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 1843200 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f6433000/0x0/0x4ffc00000, data 0x2a10233/0x2aeb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:40.937983+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 1843200 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:41.938135+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 110698496 unmapped: 1449984 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:42.938279+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1354162 data_alloc: 218103808 data_used: 446464
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 1441792 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:43.938431+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 1277952 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:44.938599+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 1269760 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f63d8000/0x0/0x4ffc00000, data 0x2a691ea/0x2b46000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,1])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:45.938769+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 2555904 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:46.938883+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 112336896 unmapped: 1908736 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.776722908s of 10.055137634s, submitted: 245
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:47.939027+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1368320 data_alloc: 218103808 data_used: 454656
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 491520 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:48.939167+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 1540096 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:49.939313+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 1155072 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f630f000/0x0/0x4ffc00000, data 0x2b313b4/0x2c0f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:50.939498+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 802816 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:51.939634+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 1425408 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:52.939807+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1368654 data_alloc: 218103808 data_used: 454656
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 1425408 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:53.939959+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 1425408 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:54.940119+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 1343488 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:55.940292+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 1335296 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f62d4000/0x0/0x4ffc00000, data 0x2b6a1e8/0x2c49000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:56.940444+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 1335296 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:57.940664+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1374096 data_alloc: 218103808 data_used: 462848
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 1335296 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.353273392s of 10.966333389s, submitted: 54
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:58.940832+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 1269760 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:59.941065+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 1064960 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f62af000/0x0/0x4ffc00000, data 0x2b8fa42/0x2c6f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:00.941178+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 1048576 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f62af000/0x0/0x4ffc00000, data 0x2b8fa42/0x2c6f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:01.941264+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 1048576 heap: 115294208 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:02.941438+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386208 data_alloc: 218103808 data_used: 471040
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 2203648 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:03.941622+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 2203648 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:04.941810+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 114327552 unmapped: 2015232 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:05.942010+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 114532352 unmapped: 1810432 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f625f000/0x0/0x4ffc00000, data 0x2bdcffd/0x2cbf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:06.942162+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f625f000/0x0/0x4ffc00000, data 0x2bdcffd/0x2cbf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115572736 unmapped: 770048 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:07.942323+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386664 data_alloc: 218103808 data_used: 471040
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115572736 unmapped: 770048 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.617740631s of 10.144355774s, submitted: 76
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:08.942466+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115105792 unmapped: 1236992 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:09.942641+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115105792 unmapped: 1236992 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:10.942717+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 1228800 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:11.942851+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 1007616 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 155 heartbeat osd_stat(store_statfs(0x4f6214000/0x0/0x4ffc00000, data 0x2c24d78/0x2d09000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:12.942981+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1395922 data_alloc: 218103808 data_used: 479232
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115359744 unmapped: 983040 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:13.943176+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 1024000 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:14.943325+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115580928 unmapped: 2859008 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:15.943472+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 155 heartbeat osd_stat(store_statfs(0x4f61db000/0x0/0x4ffc00000, data 0x2c5e838/0x2d43000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115630080 unmapped: 2809856 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:16.943638+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115630080 unmapped: 2809856 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:17.943755+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401148 data_alloc: 218103808 data_used: 487424
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 2580480 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:18.943938+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115867648 unmapped: 2572288 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.538421631s of 10.944991112s, submitted: 85
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:19.944123+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f61c3000/0x0/0x4ffc00000, data 0x2c74d32/0x2d5b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115834880 unmapped: 2605056 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:20.944274+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115834880 unmapped: 2605056 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:21.944438+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 2596864 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:22.944591+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407156 data_alloc: 218103808 data_used: 499712
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 2596864 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:23.944717+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f6186000/0x0/0x4ffc00000, data 0x2cb06b7/0x2d97000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 2523136 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:24.944846+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 1474560 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:25.945004+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 1523712 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:26.945198+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 1269760 heap: 118439936 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f6146000/0x0/0x4ffc00000, data 0x2cef02d/0x2dd7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:27.945374+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414474 data_alloc: 218103808 data_used: 507904
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 1056768 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:28.945519+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117424128 unmapped: 2064384 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:29.945652+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.955653191s of 10.619499207s, submitted: 67
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 2834432 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:30.945800+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 2744320 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:31.946035+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 2686976 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:32.946167+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1419254 data_alloc: 218103808 data_used: 507904
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 2539520 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f60f0000/0x0/0x4ffc00000, data 0x2d469fb/0x2e2e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:33.946377+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 2539520 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:34.946539+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 2490368 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:35.946747+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 2539520 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:36.946980+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 2539520 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f60a4000/0x0/0x4ffc00000, data 0x2d90c10/0x2e7a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:37.947237+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426050 data_alloc: 218103808 data_used: 507904
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 2449408 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:38.947410+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 2572288 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:39.947579+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.613806725s of 10.206427574s, submitted: 34
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 2547712 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:40.947684+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f6079000/0x0/0x4ffc00000, data 0x2dbbd31/0x2ea4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,2])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 3588096 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:41.947820+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 3293184 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: handle_auth_request added challenge on 0x5624c67fc400
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:42.947953+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438690 data_alloc: 218103808 data_used: 520192
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117407744 unmapped: 3129344 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:43.948113+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117407744 unmapped: 3129344 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Got map version 15
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:44.948273+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f6028000/0x0/0x4ffc00000, data 0x2e099f6/0x2ef5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117555200 unmapped: 2981888 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:45.948433+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117571584 unmapped: 2965504 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:46.948573+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117579776 unmapped: 2957312 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:47.948724+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445772 data_alloc: 218103808 data_used: 524288
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 2834432 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:48.948921+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5ff8000/0x0/0x4ffc00000, data 0x2e39ff5/0x2f25000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 2793472 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:49.949085+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.518175602s of 10.031496048s, submitted: 85
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 2736128 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:50.949226+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 2736128 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:51.949455+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 2736128 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:52.949670+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445478 data_alloc: 218103808 data_used: 524288
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 2703360 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:53.949848+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 2703360 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5fda000/0x0/0x4ffc00000, data 0x2e58a03/0x2f44000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:54.950000+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5fda000/0x0/0x4ffc00000, data 0x2e58a03/0x2f44000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,4])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 2498560 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:55.950233+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 2498560 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:56.950604+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 2555904 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:57.950781+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1444040 data_alloc: 218103808 data_used: 524288
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 2555904 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:58.950920+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 2555904 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:59.951120+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 3448832 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:00.951266+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.308528900s of 10.589768410s, submitted: 13
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5fa6000/0x0/0x4ffc00000, data 0x2e8d0b2/0x2f78000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 3448832 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:01.951402+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 3448832 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:02.951560+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1447192 data_alloc: 218103808 data_used: 524288
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 3440640 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:03.951705+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118120448 unmapped: 3465216 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5f6d000/0x0/0x4ffc00000, data 0x2ec5c5a/0x2fb1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:04.951870+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118169600 unmapped: 3416064 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5f2e000/0x0/0x4ffc00000, data 0x2f051ab/0x2ff0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:05.952120+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118349824 unmapped: 3235840 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:06.952287+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119406592 unmapped: 2179072 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:07.952448+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452536 data_alloc: 218103808 data_used: 524288
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119603200 unmapped: 1982464 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:08.952677+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119668736 unmapped: 1916928 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:09.952887+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5efd000/0x0/0x4ffc00000, data 0x2f36505/0x3021000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118702080 unmapped: 2883584 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:10.953158+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.688433647s of 10.379757881s, submitted: 35
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 118931456 unmapped: 2654208 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:11.953399+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 2400256 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:12.953570+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457500 data_alloc: 218103808 data_used: 524288
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 2301952 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5ec4000/0x0/0x4ffc00000, data 0x2f6eb7b/0x305a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:13.953722+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119300096 unmapped: 2285568 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:14.953881+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5e78000/0x0/0x4ffc00000, data 0x2fbaf75/0x30a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 3022848 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:15.954126+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 3022848 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:16.954271+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119627776 unmapped: 3006464 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:17.954438+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456464 data_alloc: 218103808 data_used: 524288
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 2826240 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:18.954656+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 120102912 unmapped: 2531328 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:19.954822+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 2048000 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:20.955005+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5e25000/0x0/0x4ffc00000, data 0x300dbd9/0x30f9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 1802240 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:21.955124+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.541758537s of 11.184603691s, submitted: 34
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 1777664 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:22.955251+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5e09000/0x0/0x4ffc00000, data 0x302a004/0x3115000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464780 data_alloc: 218103808 data_used: 524288
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 119980032 unmapped: 2654208 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:23.955423+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 1474560 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:24.955593+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 1368064 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:25.955807+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 1368064 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:26.955985+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5dd2000/0x0/0x4ffc00000, data 0x30608bd/0x314c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121380864 unmapped: 1253376 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:27.956115+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465692 data_alloc: 218103808 data_used: 524288
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121397248 unmapped: 1236992 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:28.956281+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5dd2000/0x0/0x4ffc00000, data 0x30608bd/0x314c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121397248 unmapped: 1236992 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:29.956486+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121503744 unmapped: 2179072 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:30.956680+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121503744 unmapped: 2179072 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:31.956920+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5d9c000/0x0/0x4ffc00000, data 0x3095bce/0x3182000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.672442436s of 10.087901115s, submitted: 29
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121626624 unmapped: 2056192 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:32.957080+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471104 data_alloc: 218103808 data_used: 524288
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 1802240 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:33.957209+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 2367488 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:34.957347+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 2367488 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:35.957502+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5d47000/0x0/0x4ffc00000, data 0x30eb6ef/0x31d7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121528320 unmapped: 2154496 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:36.957631+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 3465216 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:37.957785+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5d29000/0x0/0x4ffc00000, data 0x3108d56/0x31f5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472848 data_alloc: 218103808 data_used: 524288
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 3448832 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:38.957965+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 3383296 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:39.958151+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 120111104 unmapped: 3571712 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:40.958315+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5d17000/0x0/0x4ffc00000, data 0x311af7f/0x3207000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 3538944 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:41.958504+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5ce0000/0x0/0x4ffc00000, data 0x315238b/0x323e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.636823654s of 10.060282707s, submitted: 40
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 3235840 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:42.958715+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473798 data_alloc: 218103808 data_used: 524288
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 3252224 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:43.958871+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f5c9c000/0x0/0x4ffc00000, data 0x31974b9/0x3282000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 3235840 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:44.959117+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121184256 unmapped: 3547136 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:45.959397+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121184256 unmapped: 3547136 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:46.960412+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f5c8a000/0x0/0x4ffc00000, data 0x31a762d/0x3293000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 3465216 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:47.960607+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1486688 data_alloc: 218103808 data_used: 532480
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121397248 unmapped: 3334144 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:48.960784+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121397248 unmapped: 3334144 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:49.960955+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121397248 unmapped: 3334144 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:50.961191+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121503744 unmapped: 3227648 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:51.961355+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.885109901s of 10.018849373s, submitted: 67
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 3219456 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f5c22000/0x0/0x4ffc00000, data 0x320d09b/0x32fb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:52.961566+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1493854 data_alloc: 218103808 data_used: 540672
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 3211264 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:53.961727+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121536512 unmapped: 3194880 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:54.961950+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121536512 unmapped: 3194880 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:55.962134+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121667584 unmapped: 4112384 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:56.962388+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f5bf8000/0x0/0x4ffc00000, data 0x3237b0b/0x3326000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121757696 unmapped: 4022272 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:57.962553+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497076 data_alloc: 218103808 data_used: 540672
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121757696 unmapped: 4022272 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:58.962814+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 121757696 unmapped: 4022272 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:59.963975+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123043840 unmapped: 2736128 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:00.964113+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123052032 unmapped: 2727936 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:01.964431+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f5b95000/0x0/0x4ffc00000, data 0x3299583/0x3388000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123052032 unmapped: 2727936 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:02.964760+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1505500 data_alloc: 218103808 data_used: 548864
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123248640 unmapped: 2531328 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:03.965008+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123248640 unmapped: 2531328 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:04.965172+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.575702667s of 12.447751045s, submitted: 99
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f5b4c000/0x0/0x4ffc00000, data 0x32e3e80/0x33d2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122421248 unmapped: 3358720 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:05.965630+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 4292608 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:06.965827+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 4292608 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:07.966067+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f5b48000/0x0/0x4ffc00000, data 0x32e5970/0x33d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1508010 data_alloc: 218103808 data_used: 557056
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 4284416 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:08.966408+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122560512 unmapped: 4268032 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:09.966714+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122429440 unmapped: 4399104 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:10.966926+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 4325376 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:11.967234+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 4325376 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:12.967519+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f5b1f000/0x0/0x4ffc00000, data 0x330f6bc/0x33ff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510426 data_alloc: 218103808 data_used: 557056
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 4325376 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:13.967750+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 4325376 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:14.967989+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.771459103s of 10.038169861s, submitted: 34
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 4317184 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:15.968206+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 4325376 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:16.968413+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122601472 unmapped: 4227072 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:17.968554+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515650 data_alloc: 218103808 data_used: 557056
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122814464 unmapped: 4014080 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:18.968748+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f5ad6000/0x0/0x4ffc00000, data 0x3358e22/0x3448000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 4005888 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:19.968953+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 4161536 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:20.969150+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122740736 unmapped: 4087808 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:21.969330+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122986496 unmapped: 3842048 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:22.969516+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522438 data_alloc: 218103808 data_used: 557056
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 122986496 unmapped: 3842048 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:23.969702+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f5a87000/0x0/0x4ffc00000, data 0x33a506b/0x3497000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 3825664 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:24.969965+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 0.578993261s of 10.013916969s, submitted: 24
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123068416 unmapped: 3760128 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:25.970172+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 3702784 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:26.970318+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 3448832 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:27.970517+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1525874 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 3448832 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:28.970659+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f5a31000/0x0/0x4ffc00000, data 0x33fac86/0x34ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 3448832 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:29.977839+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f5a1f000/0x0/0x4ffc00000, data 0x340cbeb/0x34ff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1,2])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 2301952 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:30.977989+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 3350528 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:31.978190+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124600320 unmapped: 3276800 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:32.978399+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f59eb000/0x0/0x4ffc00000, data 0x343fb27/0x3533000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1534542 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:33.978585+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 3203072 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:34.978723+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 3203072 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 1.888821959s of 10.038423538s, submitted: 34
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:35.978961+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124829696 unmapped: 3047424 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 164 handle_osd_map epochs [165,165], i have 165, src has [1,165]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f5986000/0x0/0x4ffc00000, data 0x34a3485/0x3596000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:36.979152+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 125288448 unmapped: 2588672 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f5986000/0x0/0x4ffc00000, data 0x34a3485/0x3596000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:37.979415+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 125526016 unmapped: 2351104 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541706 data_alloc: 218103808 data_used: 569344
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:38.979588+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 125534208 unmapped: 2342912 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:39.979932+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124542976 unmapped: 3334144 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:40.980139+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124542976 unmapped: 3334144 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f5944000/0x0/0x4ffc00000, data 0x34e784c/0x35da000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:41.980455+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124542976 unmapped: 3334144 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:42.980697+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f592d000/0x0/0x4ffc00000, data 0x34fc260/0x35f0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 3973120 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1545516 data_alloc: 218103808 data_used: 577536
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:43.980954+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 3866624 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:44.981132+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124166144 unmapped: 3710976 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.039592266s of 10.316161156s, submitted: 69
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:45.981367+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 3350528 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:46.981559+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 3350528 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f58c2000/0x0/0x4ffc00000, data 0x3567939/0x365c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,2])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:47.981753+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 3350528 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1556924 data_alloc: 218103808 data_used: 581632
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:48.981953+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 3301376 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:49.982118+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 125706240 unmapped: 2170880 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:50.982276+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 125526016 unmapped: 2351104 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:51.982429+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 125558784 unmapped: 2318336 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:52.982558+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f5875000/0x0/0x4ffc00000, data 0x35b2f55/0x36a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,1])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 125648896 unmapped: 2228224 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1554298 data_alloc: 218103808 data_used: 581632
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:53.982692+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 125648896 unmapped: 2228224 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:54.982830+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 125886464 unmapped: 1990656 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.302689075s of 10.060779572s, submitted: 29
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:55.983006+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126935040 unmapped: 1990656 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:56.983128+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126935040 unmapped: 1990656 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f583b000/0x0/0x4ffc00000, data 0x35ec827/0x36e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:57.983269+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126943232 unmapped: 1982464 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558724 data_alloc: 218103808 data_used: 589824
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:58.983416+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126943232 unmapped: 1982464 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:59.983540+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f583c000/0x0/0x4ffc00000, data 0x35ec88c/0x36e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126943232 unmapped: 1982464 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:00.983663+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126943232 unmapped: 1982464 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 167 handle_osd_map epochs [167,168], i have 167, src has [1,168]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:01.983769+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126943232 unmapped: 1982464 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:02.983888+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126943232 unmapped: 1982464 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1562016 data_alloc: 218103808 data_used: 598016
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:03.984085+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126943232 unmapped: 1982464 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:04.984214+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126959616 unmapped: 1966080 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 168 heartbeat osd_stat(store_statfs(0x4f5839000/0x0/0x4ffc00000, data 0x35ee31e/0x36e4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:05.984364+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126959616 unmapped: 1966080 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.955657005s of 11.050517082s, submitted: 60
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:06.984504+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 168 heartbeat osd_stat(store_statfs(0x4f5839000/0x0/0x4ffc00000, data 0x35ee31e/0x36e4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126976000 unmapped: 1949696 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:07.984688+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126976000 unmapped: 1949696 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1562904 data_alloc: 218103808 data_used: 598016
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:08.984811+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126976000 unmapped: 1949696 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:09.984959+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126976000 unmapped: 1949696 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:10.985081+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126976000 unmapped: 1949696 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 168 heartbeat osd_stat(store_statfs(0x4f5839000/0x0/0x4ffc00000, data 0x35ee483/0x36e5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:11.985221+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126976000 unmapped: 1949696 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:12.985412+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126976000 unmapped: 1949696 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Got map version 16
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1563982 data_alloc: 218103808 data_used: 598016
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:13.985551+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126992384 unmapped: 1933312 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:14.985663+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 126992384 unmapped: 1933312 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 168 heartbeat osd_stat(store_statfs(0x4f5838000/0x0/0x4ffc00000, data 0x35ee587/0x36e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:15.985816+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127008768 unmapped: 1916928 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.202703476s of 10.113478661s, submitted: 41
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:16.985986+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:17.986101+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f5832000/0x0/0x4ffc00000, data 0x35f1d76/0x36ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571080 data_alloc: 218103808 data_used: 602112
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:18.986263+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:19.986419+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:20.986584+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 170 handle_osd_map epochs [170,171], i have 170, src has [1,171]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:21.986692+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:22.986872+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f5831000/0x0/0x4ffc00000, data 0x35f38f2/0x36ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572596 data_alloc: 218103808 data_used: 610304
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:23.987083+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:24.987239+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:25.987443+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:26.987637+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:27.987844+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 1892352 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572596 data_alloc: 218103808 data_used: 610304
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:28.988184+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.914357185s of 12.439829826s, submitted: 50
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f5831000/0x0/0x4ffc00000, data 0x35f38f2/0x36ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:29.988378+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:30.988550+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:31.988739+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f5832000/0x0/0x4ffc00000, data 0x35f38f2/0x36ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:32.988875+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f5832000/0x0/0x4ffc00000, data 0x35f38f2/0x36ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571892 data_alloc: 218103808 data_used: 610304
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:33.989088+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:34.989310+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:35.989597+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f5832000/0x0/0x4ffc00000, data 0x35f38f2/0x36ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:36.989780+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f5832000/0x0/0x4ffc00000, data 0x35f38f2/0x36ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:37.989956+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1573484 data_alloc: 218103808 data_used: 610304
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:38.990127+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f5831000/0x0/0x4ffc00000, data 0x35f398d/0x36ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 1875968 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.773765564s of 10.584938049s, submitted: 4
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:39.990318+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127057920 unmapped: 1867776 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:40.990484+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127057920 unmapped: 1867776 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 171 ms_handle_reset con 0x5624c67fc400 session 0x5624c96994a0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:41.990612+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:42.990778+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Got map version 17
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:43.991009+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572810 data_alloc: 218103808 data_used: 610304
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f5831000/0x0/0x4ffc00000, data 0x35f3abc/0x36ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:44.991221+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:45.991415+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:46.991554+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:47.991718+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f5832000/0x0/0x4ffc00000, data 0x35f3a86/0x36ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 171 handle_osd_map epochs [172,172], i have 172, src has [1,172]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _renew_subs
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 171 handle_osd_map epochs [172,172], i have 172, src has [1,172]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:48.991862+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1577160 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:49.992060+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:50.992285+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:51.992443+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 172 heartbeat osd_stat(store_statfs(0x4f582e000/0x0/0x4ffc00000, data 0x35f566c/0x36ef000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:52.992637+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:53.992885+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576792 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:54.993102+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:55.993304+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 172 heartbeat osd_stat(store_statfs(0x4f582e000/0x0/0x4ffc00000, data 0x35f566c/0x36ef000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:56.993467+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:57.993633+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.244745255s of 18.642427444s, submitted: 223
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:58.993803+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579766 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:59.993971+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 2539520 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:00.994161+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:01.994300+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:02.994444+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:03.994650+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579766 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:04.994805+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:05.995039+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:06.995227+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:07.995413+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:08.995591+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579766 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:09.995764+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:10.996648+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:11.996878+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:12.997076+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:13.997228+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579766 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:14.997380+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:15.997558+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:16.997725+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 2531328 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:17.997929+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:18.998085+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579766 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:19.998245+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:20.998469+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:21.998628+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:22.998793+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:23.999009+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579766 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:24.999133+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:25.999314+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:26.999518+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:27.999639+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:28.999714+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579766 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:29.999948+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:31.000071+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 2523136 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:32.000207+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 2514944 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:33.000333+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 2514944 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:34.000451+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579766 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 2514944 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:35.000615+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 2514944 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:36.000804+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 2514944 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:37.000972+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 2514944 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:38.001149+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582b000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 2514944 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:39.001322+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579766 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 2514944 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:40.001468+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 2514944 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:41.001631+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.923511505s of 42.943687439s, submitted: 15
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 ms_handle_reset con 0x5624c906f800 session 0x5624c865c000
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 1998848 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:42.001823+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 1990656 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:43.001973+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 1990656 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Got map version 18
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:44.002136+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:45.002317+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:46.002531+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:47.002672+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:48.002834+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:49.002966+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:50.003504+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:51.003624+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:52.003767+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:53.004009+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:54.004152+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:55.004264+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:56.005968+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:57.006095+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 2555904 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'config diff' '{prefix=config diff}'
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'config show' '{prefix=config show}'
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'counter dump' '{prefix=counter dump}'
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:58.006236+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'counter schema' '{prefix=counter schema}'
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 2547712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:59.006435+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128630784 unmapped: 2392064 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:00.006589+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'log dump' '{prefix=log dump}'
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128630784 unmapped: 13434880 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'perf dump' '{prefix=perf dump}'
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:01.006831+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'perf schema' '{prefix=perf schema}'
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128303104 unmapped: 13762560 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:02.006968+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128303104 unmapped: 13762560 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:03.007079+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128303104 unmapped: 13762560 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:04.007221+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128303104 unmapped: 13762560 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:05.008246+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128303104 unmapped: 13762560 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:06.008399+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128303104 unmapped: 13762560 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:07.008518+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128303104 unmapped: 13762560 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:08.008686+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128303104 unmapped: 13762560 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:09.008814+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128303104 unmapped: 13762560 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:10.008979+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128303104 unmapped: 13762560 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:11.009307+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.873598099s of 30.019792557s, submitted: 201
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128303104 unmapped: 13762560 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:12.009497+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128303104 unmapped: 13762560 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:13.009675+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Got map version 19
Oct 01 17:19:20 compute-0 ceph-osd[89167]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128303104 unmapped: 13762560 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:14.009810+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128303104 unmapped: 13762560 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:15.009946+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128303104 unmapped: 13762560 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:16.010112+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128303104 unmapped: 13762560 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:17.010224+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128303104 unmapped: 13762560 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:18.010354+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128303104 unmapped: 13762560 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:19.010472+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128311296 unmapped: 13754368 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:20.010597+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128311296 unmapped: 13754368 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:21.010777+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128311296 unmapped: 13754368 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:22.010964+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128311296 unmapped: 13754368 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:23.011107+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128311296 unmapped: 13754368 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:24.011223+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128311296 unmapped: 13754368 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:25.011332+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128311296 unmapped: 13754368 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:26.011818+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128311296 unmapped: 13754368 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:27.011977+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128311296 unmapped: 13754368 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:28.012189+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128311296 unmapped: 13754368 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:29.012409+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128311296 unmapped: 13754368 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:30.012557+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128319488 unmapped: 13746176 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:31.012786+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128319488 unmapped: 13746176 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:32.012988+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128319488 unmapped: 13746176 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:33.013134+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128319488 unmapped: 13746176 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:34.013360+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128319488 unmapped: 13746176 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:35.013532+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128319488 unmapped: 13746176 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:36.013782+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128319488 unmapped: 13746176 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:37.014588+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128319488 unmapped: 13746176 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:38.014768+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128319488 unmapped: 13746176 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:39.014979+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128319488 unmapped: 13746176 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:40.015199+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128319488 unmapped: 13746176 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:41.015375+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128319488 unmapped: 13746176 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:42.015761+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128327680 unmapped: 13737984 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:43.015952+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128327680 unmapped: 13737984 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:44.016150+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128327680 unmapped: 13737984 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:45.016280+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128327680 unmapped: 13737984 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:46.016531+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128327680 unmapped: 13737984 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:47.016909+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128335872 unmapped: 13729792 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:48.017138+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128335872 unmapped: 13729792 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:49.017256+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128335872 unmapped: 13729792 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:50.017409+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128335872 unmapped: 13729792 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:51.017594+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128335872 unmapped: 13729792 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:52.017718+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128335872 unmapped: 13729792 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:53.017853+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128344064 unmapped: 13721600 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:54.018000+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128344064 unmapped: 13721600 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:55.018221+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128344064 unmapped: 13721600 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:56.018448+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128344064 unmapped: 13721600 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:57.018612+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128344064 unmapped: 13721600 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:58.018819+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128344064 unmapped: 13721600 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:59.019014+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128344064 unmapped: 13721600 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:00.019215+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128344064 unmapped: 13721600 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:01.019381+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128344064 unmapped: 13721600 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:02.019603+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128344064 unmapped: 13721600 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:03.019741+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128344064 unmapped: 13721600 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:04.019914+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128344064 unmapped: 13721600 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:05.020090+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128344064 unmapped: 13721600 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:06.020327+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128344064 unmapped: 13721600 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:07.020500+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128352256 unmapped: 13713408 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:08.020675+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128352256 unmapped: 13713408 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:09.020838+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128352256 unmapped: 13713408 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:10.021013+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128352256 unmapped: 13713408 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:11.021155+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128352256 unmapped: 13713408 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:12.021337+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128352256 unmapped: 13713408 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:13.021474+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128352256 unmapped: 13713408 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:14.021636+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128352256 unmapped: 13713408 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:15.021818+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128352256 unmapped: 13713408 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:16.022063+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128352256 unmapped: 13713408 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:17.022203+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128352256 unmapped: 13713408 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:18.022324+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128352256 unmapped: 13713408 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:19.022452+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128352256 unmapped: 13713408 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:20.022587+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128352256 unmapped: 13713408 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:21.022745+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128352256 unmapped: 13713408 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:22.022888+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128352256 unmapped: 13713408 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:23.023120+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128352256 unmapped: 13713408 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:24.023278+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128360448 unmapped: 13705216 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:25.023431+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128360448 unmapped: 13705216 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:26.023647+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128360448 unmapped: 13705216 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:27.023777+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128360448 unmapped: 13705216 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:28.023923+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128360448 unmapped: 13705216 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:29.024060+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128360448 unmapped: 13705216 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:30.024195+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128360448 unmapped: 13705216 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:31.024370+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128360448 unmapped: 13705216 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:32.024511+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128360448 unmapped: 13705216 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:33.024673+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128360448 unmapped: 13705216 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:34.024800+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128360448 unmapped: 13705216 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:35.024973+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128360448 unmapped: 13705216 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:36.025771+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128360448 unmapped: 13705216 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:37.026118+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128360448 unmapped: 13705216 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:38.026313+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 13697024 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:39.026506+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 13697024 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:40.026668+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 13697024 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:41.026802+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 13697024 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:42.026948+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 13697024 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:43.027106+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 13697024 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:44.027251+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 13697024 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:45.027399+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 13697024 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:46.027567+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 13697024 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:47.027788+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 13697024 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:48.027942+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 13697024 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:49.028077+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 13697024 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:50.028209+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 13697024 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:51.028379+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 13697024 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:52.028534+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 13697024 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:53.028726+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 13697024 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:54.028865+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 13688832 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:55.028986+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 13688832 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:56.029168+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 13688832 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:57.029323+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 13688832 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:58.029477+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 13688832 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:59.029690+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 13688832 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:00.029949+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 13688832 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:01.030162+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 13688832 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:02.030289+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 13688832 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:03.030427+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 13688832 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:04.030588+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 13688832 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:05.030756+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 13688832 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:06.030970+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 13688832 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:07.031138+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 13688832 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:08.031263+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 13688832 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:09.031423+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 13688832 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:10.031601+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 13688832 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:11.031729+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 13688832 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:12.032161+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:13.032346+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:14.032498+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:15.032622+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:16.032763+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:17.032867+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:18.033037+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:19.033254+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:20.033459+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:21.033660+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:22.033843+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:23.033995+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:24.034170+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:25.034356+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:26.034516+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:27.034645+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:28.034812+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:29.034950+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:30.035087+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:31.035260+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:32.035439+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:33.035656+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:34.036010+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:35.036201+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:36.036399+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 13680640 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:37.036535+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 13672448 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:38.036664+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 13672448 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:39.036798+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 13672448 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:40.036956+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 13672448 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:41.037082+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 13672448 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:42.037224+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 13672448 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:43.037347+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128401408 unmapped: 13664256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:44.037460+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128401408 unmapped: 13664256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:45.037637+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128401408 unmapped: 13664256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:46.037848+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128401408 unmapped: 13664256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:47.038060+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128401408 unmapped: 13664256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:48.038342+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128401408 unmapped: 13664256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:49.038501+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128409600 unmapped: 13656064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:50.038659+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128409600 unmapped: 13656064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:51.038819+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128409600 unmapped: 13656064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:52.038954+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128409600 unmapped: 13656064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:53.039104+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128409600 unmapped: 13656064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:54.039254+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128409600 unmapped: 13656064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:55.039392+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128409600 unmapped: 13656064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:56.039596+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128409600 unmapped: 13656064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:57.039728+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128409600 unmapped: 13656064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:58.039840+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128409600 unmapped: 13656064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:59.039998+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128409600 unmapped: 13656064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:00.040146+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 13647872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:01.040302+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 13647872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:02.040522+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 13647872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:03.040736+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 13647872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:04.040947+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 13647872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:05.041088+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 13647872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:06.041327+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 13647872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:07.041478+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 13647872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:08.041698+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 13647872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:09.041829+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 13647872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:10.042004+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 13647872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:11.042161+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 13647872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:12.042466+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 13647872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:13.042588+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 13647872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:14.042708+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 13647872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:15.042881+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 13647872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:16.043151+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 13647872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:17.043329+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 13647872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:18.043476+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 13647872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:19.043634+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128425984 unmapped: 13639680 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:20.043787+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128425984 unmapped: 13639680 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:21.043938+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128425984 unmapped: 13639680 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:22.044074+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128425984 unmapped: 13639680 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:23.044230+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128425984 unmapped: 13639680 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:24.044438+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128425984 unmapped: 13639680 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:25.044601+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128425984 unmapped: 13639680 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:26.044830+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128425984 unmapped: 13639680 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:27.044987+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128425984 unmapped: 13639680 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:28.045120+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 13631488 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:29.045257+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 13631488 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:30.045389+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 13631488 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:31.045611+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 13631488 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:32.045787+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 13631488 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:33.045975+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 13631488 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:34.046190+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 13631488 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:35.046390+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 13631488 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:36.046667+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 13631488 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:37.046835+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 13623296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:38.047058+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 13623296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:39.047315+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 13623296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:40.047493+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 13623296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:41.047670+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 13623296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 13K writes, 51K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 3727 syncs, 3.56 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2847 writes, 9718 keys, 2847 commit groups, 1.0 writes per commit group, ingest: 13.06 MB, 0.02 MB/s
                                           Interval WAL: 2847 writes, 974 syncs, 2.92 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:42.047855+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 13623296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:43.048015+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 13623296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:44.048197+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 13623296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:45.048448+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 13623296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:46.048826+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 13623296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:47.049731+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 13623296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:48.049867+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 13623296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:49.050077+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 13623296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:50.050318+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 13623296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:51.050490+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 13623296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:52.050693+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 13623296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:53.050885+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128450560 unmapped: 13615104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:54.051094+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128450560 unmapped: 13615104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:55.051612+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128450560 unmapped: 13615104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:56.051979+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128450560 unmapped: 13615104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:57.052162+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128450560 unmapped: 13615104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:58.052421+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128450560 unmapped: 13615104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:59.052580+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128450560 unmapped: 13615104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:00.052762+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128450560 unmapped: 13615104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:01.053024+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128450560 unmapped: 13615104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:02.053235+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128450560 unmapped: 13615104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:03.053702+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128450560 unmapped: 13615104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:04.053960+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128450560 unmapped: 13615104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:05.054127+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128450560 unmapped: 13615104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:06.054301+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128450560 unmapped: 13615104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:07.054436+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 13606912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:08.054628+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 13606912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:09.054861+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 13606912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:10.055051+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 13606912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:11.055290+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 13606912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:12.055448+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 13606912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:13.055631+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 13606912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:14.055820+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 13606912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:15.056013+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 13606912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:16.056236+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 13606912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:17.056426+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 13606912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:18.056668+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 13606912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:19.056943+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 13606912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:20.057088+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 13606912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:21.057280+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 13606912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:22.057443+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 13606912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:23.057566+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 13598720 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:24.057732+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 13598720 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:25.057878+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 13598720 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:26.058112+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 13598720 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:27.058268+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 13598720 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:28.058470+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 13598720 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:29.058667+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 13598720 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:30.058864+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 13598720 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:31.059140+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 13598720 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:32.059309+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 13598720 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:33.059533+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 13598720 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:34.059665+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 13598720 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:35.059980+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579062 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 13598720 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:36.060243+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 13598720 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:37.060550+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 13598720 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:38.060758+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 13598720 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:39.060885+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f582c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x6cdf9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 267.826690674s of 267.843902588s, submitted: 2
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127909888 unmapped: 14155776 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:40.061035+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:41.061195+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:42.061342+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:43.061504+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:44.061662+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:45.061821+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:46.061993+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:47.062157+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:48.062309+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:49.062508+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:50.062691+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:51.062835+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:52.062996+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:53.063126+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:54.063305+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:55.063416+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:56.063594+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:57.063725+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:58.063850+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:59.063949+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:00.064080+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:01.064200+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:02.064306+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:03.064456+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:04.064615+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:05.064830+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:06.065097+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:07.065276+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:08.065490+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:09.065656+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:10.065812+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:11.065967+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:12.066106+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:13.066244+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:14.066397+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:15.066555+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:16.066717+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:17.067013+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:18.067155+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:19.067293+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:20.067423+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:21.067554+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:22.067700+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:23.067800+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 14688256 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:24.068037+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127385600 unmapped: 14680064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:25.068193+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127385600 unmapped: 14680064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:26.068361+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127385600 unmapped: 14680064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:27.068513+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127385600 unmapped: 14680064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:28.068663+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127385600 unmapped: 14680064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:29.068779+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127385600 unmapped: 14680064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:30.068961+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127385600 unmapped: 14680064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:31.069106+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127385600 unmapped: 14680064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:32.069260+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127385600 unmapped: 14680064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:33.069434+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127385600 unmapped: 14680064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:34.069580+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127385600 unmapped: 14680064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:35.069747+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127385600 unmapped: 14680064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:36.069953+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127385600 unmapped: 14680064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:37.070087+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127385600 unmapped: 14680064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:38.070248+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127385600 unmapped: 14680064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:39.070392+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127385600 unmapped: 14680064 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:40.070520+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127393792 unmapped: 14671872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:41.070670+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127393792 unmapped: 14671872 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:42.070803+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 14647296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:43.070960+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 14647296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:44.071129+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 14647296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:45.071282+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 14647296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:46.071475+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 14647296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:47.071616+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 14647296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:48.071801+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 14647296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:49.071955+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 14647296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:50.072053+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 14647296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:51.072203+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 14647296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:52.073655+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 14647296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:53.074674+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 14647296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:54.075630+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 14647296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:55.076563+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 14647296 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:56.077046+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 14639104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:57.077738+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 14639104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:58.078354+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 14639104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:59.078601+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 14639104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:00.078826+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 14639104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:01.079135+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 14639104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:02.079419+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 14639104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:03.079752+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 14639104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:04.079983+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 14639104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:05.080224+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 14639104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:06.080502+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 14639104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:07.080723+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 14639104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:08.080911+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 14639104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:09.081113+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 14639104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:10.081379+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 14639104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:11.081526+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 14639104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:12.081768+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 14639104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:13.081918+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 14639104 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:14.082052+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127434752 unmapped: 14630912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:15.082308+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127434752 unmapped: 14630912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:16.082584+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127434752 unmapped: 14630912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:17.082816+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127434752 unmapped: 14630912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:18.082954+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127434752 unmapped: 14630912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:19.083177+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127434752 unmapped: 14630912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:20.083399+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127434752 unmapped: 14630912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:21.083576+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127434752 unmapped: 14630912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:22.083754+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127434752 unmapped: 14630912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:23.083921+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127434752 unmapped: 14630912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:24.084087+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127434752 unmapped: 14630912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:25.084217+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127434752 unmapped: 14630912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:26.084484+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127434752 unmapped: 14630912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:27.084681+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127434752 unmapped: 14630912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:28.084854+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127434752 unmapped: 14630912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:29.085024+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127434752 unmapped: 14630912 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:30.085161+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127442944 unmapped: 14622720 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:31.085357+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127442944 unmapped: 14622720 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:32.085615+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127442944 unmapped: 14622720 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:33.085842+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127442944 unmapped: 14622720 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:34.086053+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127451136 unmapped: 14614528 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:35.086228+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127451136 unmapped: 14614528 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:36.086456+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127451136 unmapped: 14614528 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:37.086626+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127451136 unmapped: 14614528 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:38.086858+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127451136 unmapped: 14614528 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:39.087111+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127451136 unmapped: 14614528 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:40.087297+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127451136 unmapped: 14614528 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:41.087498+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127451136 unmapped: 14614528 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:42.087669+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127459328 unmapped: 14606336 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:43.087825+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127459328 unmapped: 14606336 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:44.087996+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:45.088247+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127459328 unmapped: 14606336 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:46.088463+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127459328 unmapped: 14606336 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:47.088639+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127459328 unmapped: 14606336 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:48.088818+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127459328 unmapped: 14606336 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:49.089067+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127459328 unmapped: 14606336 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:50.089293+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127459328 unmapped: 14606336 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:51.090634+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127467520 unmapped: 14598144 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:52.090808+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127467520 unmapped: 14598144 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:53.090954+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127467520 unmapped: 14598144 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:54.091183+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127467520 unmapped: 14598144 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:55.091430+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127467520 unmapped: 14598144 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:56.091718+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127467520 unmapped: 14598144 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:57.091993+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127467520 unmapped: 14598144 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:58.092201+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127467520 unmapped: 14598144 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:59.092465+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127467520 unmapped: 14598144 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:00.092637+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127467520 unmapped: 14598144 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:01.092978+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127467520 unmapped: 14598144 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:02.093193+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127467520 unmapped: 14598144 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:03.093382+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127467520 unmapped: 14598144 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:04.093541+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127467520 unmapped: 14598144 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:05.094367+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127467520 unmapped: 14598144 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:06.094656+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127467520 unmapped: 14598144 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:07.094871+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127475712 unmapped: 14589952 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:08.095140+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127475712 unmapped: 14589952 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:09.095338+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127475712 unmapped: 14589952 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:10.095516+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127475712 unmapped: 14589952 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:11.095683+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127475712 unmapped: 14589952 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:12.095957+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127475712 unmapped: 14589952 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:13.096157+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127475712 unmapped: 14589952 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:14.096331+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127475712 unmapped: 14589952 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:15.096543+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127475712 unmapped: 14589952 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:16.096775+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127475712 unmapped: 14589952 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:17.096945+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127475712 unmapped: 14589952 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:18.097117+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127475712 unmapped: 14589952 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:19.097386+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127475712 unmapped: 14589952 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:20.097550+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127475712 unmapped: 14589952 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:21.097859+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127475712 unmapped: 14589952 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:22.098175+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127475712 unmapped: 14589952 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:23.098393+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127475712 unmapped: 14589952 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:24.098596+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127475712 unmapped: 14589952 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:25.098765+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:26.098978+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:27.099124+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:28.099670+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:29.099841+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:30.100009+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:31.100295+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:32.100426+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:33.100579+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:34.100771+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:35.100957+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:36.101296+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:37.101430+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:38.101637+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:39.101775+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:40.101944+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:41.102085+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:42.102232+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:43.102381+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:44.102504+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:45.102667+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:20 compute-0 ceph-osd[89167]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:20 compute-0 ceph-osd[89167]: bluestore.MempoolThread(0x5624c4fffb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578886 data_alloc: 218103808 data_used: 618496
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:46.102867+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:47.103081+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 14581760 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'config diff' '{prefix=config diff}'
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'config show' '{prefix=config show}'
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'counter dump' '{prefix=counter dump}'
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'counter schema' '{prefix=counter schema}'
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:48.103210+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127672320 unmapped: 14393344 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: tick
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_tickets
Oct 01 17:19:20 compute-0 ceph-osd[89167]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:49.103336+0000)
Oct 01 17:19:20 compute-0 ceph-osd[89167]: prioritycache tune_memory target: 4294967296 mapped: 127188992 unmapped: 14876672 heap: 142065664 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:20 compute-0 ceph-osd[89167]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f541c000/0x0/0x4ffc00000, data 0x35f70cf/0x36f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x70ef9c7), peers [0,2] op hist [])
Oct 01 17:19:20 compute-0 ceph-osd[89167]: do_command 'log dump' '{prefix=log dump}'
Oct 01 17:19:20 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 17:19:20 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 01 17:19:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2779985238' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 01 17:19:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 01 17:19:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2947647983' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 01 17:19:20 compute-0 rsyslogd[1001]: imjournal from <np0005464933:ceph-osd>: begin to drop messages due to rate-limiting
Oct 01 17:19:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 01 17:19:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/79162179' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 01 17:19:20 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 01 17:19:20 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1170806948' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 01 17:19:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2108697836' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 01 17:19:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3621541757' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 01 17:19:20 compute-0 ceph-mon[74273]: pgmap v1518: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2779985238' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 01 17:19:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/2947647983' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 01 17:19:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/79162179' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 01 17:19:20 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1170806948' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 01 17:19:21 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 01 17:19:21 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/929510438' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14929 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005739061380803542 of space, bias 4.0, pg target 0.6886873656964251 quantized to 16 (current 16)
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14931 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14933 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:19:21 compute-0 podman[297566]: 2025-10-01 17:19:21.744850588 +0000 UTC m=+0.061689108 container health_status 82522023bb548d4feb1dcf108127d722ed9c5c264b6d2a05fcd6699b1db955fe (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 01 17:19:21 compute-0 podman[297568]: 2025-10-01 17:19:21.745383095 +0000 UTC m=+0.062442414 container health_status d955694ca599f86d0eb9b51b9ede8e6c8639d6066e67faeae28229fd7958ad5c (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14937 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:19:21 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/929510438' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 01 17:19:21 compute-0 ceph-mon[74273]: from='client.14929 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:19:21 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14935 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:22 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14939 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:19:22 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:22 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14943 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:19:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:19:22 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 01 17:19:22 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1955364854' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 01 17:19:23 compute-0 ceph-mon[74273]: from='client.14931 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:23 compute-0 ceph-mon[74273]: from='client.14933 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:19:23 compute-0 ceph-mon[74273]: from='client.14937 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:19:23 compute-0 ceph-mon[74273]: from='client.14935 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:23 compute-0 ceph-mon[74273]: from='client.14939 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:19:23 compute-0 ceph-mon[74273]: pgmap v1519: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:23 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1955364854' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 01 17:19:23 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14947 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:19:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 01 17:19:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1604407912' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 01 17:19:23 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14951 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:19:23 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 01 17:19:23 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1169775264' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 01 17:19:24 compute-0 ceph-mon[74273]: from='client.14943 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:19:24 compute-0 ceph-mon[74273]: from='client.14947 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:19:24 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1604407912' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 01 17:19:24 compute-0 ceph-mon[74273]: from='client.14951 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 17:19:24 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1169775264' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 01 17:19:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 01 17:19:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/604023589' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 01 17:19:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 01 17:19:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 01 17:19:24 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:42.332955+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 204800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:43.333088+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 204800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:44.333209+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 188416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:45.333374+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 188416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:46.333562+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 188416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:47.333866+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 188416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:48.334019+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 188416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:49.334161+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 188416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:50.334304+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 188416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:51.334478+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 180224 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:52.334631+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 180224 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:53.334760+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 163840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:54.334869+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 163840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:55.334963+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 163840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:56.335057+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 155648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:57.335329+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 155648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:58.335448+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 147456 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:45:59.335620+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 147456 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:00.335748+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 139264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:01.336045+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 139264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:02.336181+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 139264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:03.336346+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 131072 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:04.336477+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 131072 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:05.336630+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 131072 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:06.336794+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 122880 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:07.336943+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 122880 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:08.337088+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 114688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:09.337304+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 114688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:10.337552+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 106496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:11.337677+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 106496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:12.337851+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 98304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:13.337985+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 98304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:14.338164+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 90112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:15.338285+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 81920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:16.338418+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 81920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:17.338569+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 73728 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:18.338707+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 73728 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:19.338868+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 57344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:20.339004+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 49152 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:21.339194+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 49152 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:22.339349+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 40960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:23.339503+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 40960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:24.339643+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 32768 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:25.339773+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 32768 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:26.339966+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 32768 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:27.340130+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 24576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:28.340349+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 24576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:29.340509+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 24576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:30.340663+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:31.340944+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:32.341114+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 8192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:33.341293+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 8192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:34.341522+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 0 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:35.341738+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 0 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:36.341926+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1040384 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:37.342049+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1040384 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:38.342189+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1040384 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:39.342313+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:40.342513+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:41.342764+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:42.342958+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:43.343138+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:44.343346+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:45.343483+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:46.343606+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:47.343751+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:48.343881+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:49.344056+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:50.344243+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:51.344415+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:52.344571+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:53.344703+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:54.344868+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:55.345044+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:56.345201+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:57.345309+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:58.345467+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1015808 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:46:59.345653+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 991232 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:00.345818+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 991232 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:01.345959+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 991232 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:02.346091+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 991232 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:03.346259+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:04.346453+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:05.346653+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:06.346798+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:07.346965+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:08.347089+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:09.347270+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:10.347709+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:11.347988+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:12.348155+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:13.348319+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:14.348465+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:15.348625+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:16.348798+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:17.348960+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 974848 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:18.349166+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:19.349349+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:20.349569+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:21.349915+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:22.350071+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:23.350191+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:24.350340+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:25.350461+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:26.350591+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:27.350736+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:28.350943+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:29.351102+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:30.351487+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:31.352151+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:32.352348+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:33.352568+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:34.352958+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:35.353110+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:36.353381+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:37.353568+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:38.353734+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 942080 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:39.353994+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 942080 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:40.354135+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 942080 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:41.354324+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 942080 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:42.354514+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 942080 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:43.354661+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 933888 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:44.354787+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 933888 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:45.354952+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:46.355097+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:47.355204+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:48.355352+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:49.355507+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:50.355637+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:51.355953+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:52.356097+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:53.356284+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 917504 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:54.356454+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 917504 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:55.356690+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 917504 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:56.356848+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 917504 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:57.357054+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 917504 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:58.357207+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:47:59.357350+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:00.357789+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:01.357966+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:02.358087+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:03.358427+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:04.358548+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:05.359347+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:06.359506+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:07.359690+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:08.359885+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:09.360043+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:10.360260+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:11.360461+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:12.360599+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:13.360991+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:14.361215+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:15.361380+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:16.361552+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:17.361705+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:18.361978+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:19.362136+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:20.362371+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:21.362697+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:22.362863+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:23.363070+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:24.363310+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:25.363437+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:26.363615+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:27.363826+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:28.363983+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:29.364155+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:30.364320+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:31.364517+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:32.364741+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:33.364989+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:34.365270+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:35.365435+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:36.365645+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:37.365828+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:38.366005+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 851968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:39.366228+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 851968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:40.366423+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 843776 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:41.366702+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:42.366877+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:43.367065+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:44.367270+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:45.367485+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:46.367770+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:47.368025+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:48.368244+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:49.368542+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:50.368785+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:51.369043+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:52.369217+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:53.369443+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 827392 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:54.369581+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 827392 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:55.369700+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 827392 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:56.369827+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 827392 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:57.370023+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 827392 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:58.370166+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:48:59.370366+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:00.370612+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:01.373325+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:02.373535+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:03.373845+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:04.373979+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:05.374133+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:06.374969+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:07.375095+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:08.375477+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:09.376012+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:10.376159+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:11.376464+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:12.376617+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:13.376809+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:14.377005+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 884736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:15.377122+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 884736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:16.377318+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 884736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:17.377441+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 884736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:18.377756+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 884736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:19.377911+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:20.378023+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:21.378226+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:22.378434+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:23.378656+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:24.378872+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:25.379179+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:26.379383+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:27.379581+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:28.379829+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:29.379935+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:30.380068+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:31.380277+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:32.380453+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:33.380634+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:34.380772+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:35.380951+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:36.381083+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:37.381231+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:38.381394+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:39.381577+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:40.381754+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:41.381959+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:42.382121+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:43.383793+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc ms_handle_reset ms_handle_reset con 0x559b45991c00
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3235544197
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: get_auth_request con 0x559b49514400 auth_method 0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_configure stats_period=5
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:44.383925+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 540672 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:45.384010+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 540672 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:46.384139+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 540672 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:47.384298+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 540672 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 ms_handle_reset con 0x559b47e9d000 session 0x559b47b32780
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: handle_auth_request added challenge on 0x559b49c30000
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:48.384431+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:49.384610+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:50.384765+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:51.384960+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:52.385125+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:53.385290+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:54.385408+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:55.385555+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:56.385799+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:57.385942+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:58.386163+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:49:59.386381+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:00.386556+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:01.386853+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:02.387156+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:03.387372+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:04.387557+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:05.387809+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:06.388106+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:07.388349+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:08.388575+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:09.388854+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:10.389078+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:11.389255+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:12.389428+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:13.389651+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:14.389829+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:15.389994+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 532480 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:16.390139+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 524288 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:17.390292+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:18.390458+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:19.390627+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:20.390815+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:21.390996+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:22.391278+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:23.391444+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:24.391615+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:25.391952+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:26.392158+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:27.392677+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:28.392858+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:29.393142+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:30.393382+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:31.393989+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:32.394136+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:33.394303+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:34.394450+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:35.394580+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:36.394746+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:37.394961+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:38.395155+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:39.395335+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:40.395452+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:41.395671+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:42.395821+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:43.395999+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:44.396131+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:45.396268+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:46.396417+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:47.396578+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:48.396750+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:49.397064+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:50.397254+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:51.397453+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:52.397639+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:53.397803+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:54.397986+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:55.398157+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 516096 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:56.398298+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:57.398429+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:58.398608+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:50:59.398780+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:00.398983+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:01.399208+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:02.399487+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:03.399745+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:04.400012+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:05.400269+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:06.400522+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:07.400829+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:08.401077+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:09.401379+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:10.401614+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:11.401951+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:12.402206+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:13.402649+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 507904 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:14.403257+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:15.403619+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:16.403798+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:17.404485+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:18.404725+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:19.405036+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:20.405265+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:21.405468+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:22.406085+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:23.406260+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:24.406404+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:25.406685+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:26.406834+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:27.407030+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:28.407187+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:29.407427+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:30.407585+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:31.407825+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:32.407992+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:33.408141+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 483328 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:34.408299+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 466944 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:35.408446+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 466944 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:36.408601+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 466944 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:37.408750+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 466944 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:38.408914+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 466944 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:39.409045+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 466944 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:40.409164+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 466944 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:41.409334+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 442368 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:42.409453+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 442368 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:43.409600+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 442368 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:44.409698+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:45.409869+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:46.410079+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:47.410233+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:48.410315+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:49.410482+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:50.410661+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:51.410821+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:52.414542+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:53.414661+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:54.414830+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:55.414995+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:56.415156+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:57.415314+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:58.415442+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:51:59.415597+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:00.415758+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:01.415988+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:02.416143+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:03.416230+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:04.416403+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:05.416544+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:06.416747+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:07.416924+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:08.417039+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:09.417185+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:10.417316+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:11.417457+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:12.417648+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:13.417832+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:14.417960+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:15.418135+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:16.418270+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:17.418447+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 434176 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:18.418606+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:19.418772+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:20.418939+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:21.419112+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:22.419277+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:23.419434+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:24.419584+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:25.419730+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:26.419940+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:27.420145+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:28.420434+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:29.420602+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:30.420768+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:31.420942+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:32.421093+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:33.421294+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 409600 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:34.421447+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 409600 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:35.421616+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 409600 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:36.421770+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 409600 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:37.421928+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 409600 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:38.422089+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:39.422235+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:40.422346+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:41.422501+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:42.422663+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:43.428990+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:44.429134+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:45.429257+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:46.429393+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:47.429959+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:48.430113+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:49.430276+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:50.430441+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:51.430619+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:52.430768+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:53.430881+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:54.431037+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:55.431184+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:56.431380+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:57.431534+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:58.431767+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:52:59.431972+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:00.432092+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:01.432283+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:02.432418+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:03.432555+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:04.432750+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:05.432931+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:06.433091+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:07.433280+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:08.433452+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:09.433604+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:10.433760+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:11.434005+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:12.434155+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:13.434314+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:14.434460+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:15.434565+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:16.434681+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:17.434795+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:18.434914+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:19.435026+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:20.435141+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:21.435263+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:22.435400+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:23.435556+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:24.435713+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:25.435878+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:26.436074+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:27.436262+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:28.436444+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:29.436577+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:30.436706+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:31.436826+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:32.437155+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:33.437317+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:34.437441+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:35.437582+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:36.437772+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:37.437953+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:38.438103+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:39.438288+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:40.438428+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 425984 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:41.438624+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 409600 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:42.438851+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 409600 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:43.439107+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:44.439348+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:45.439513+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:46.439758+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:47.439964+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:48.440171+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:49.440395+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:50.441570+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:51.442982+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:52.443156+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:53.443616+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:54.443779+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:55.444526+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:56.445185+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:57.445326+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:58.445505+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:53:59.445961+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:00.446109+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:01.446597+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:02.446785+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 393216 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:03.447171+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:04.447518+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:05.447785+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:06.448056+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:07.448280+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:08.448482+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:09.448638+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:10.448785+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:11.448984+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:12.449150+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:13.449328+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:14.449497+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:15.449674+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:16.449821+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:17.449996+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:18.450172+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:19.450376+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:20.450539+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:21.450745+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:22.450948+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:23.451103+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 376832 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:24.451247+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:25.451417+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:26.451552+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:27.451701+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:28.451837+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:29.451972+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:30.452202+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:31.452365+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:32.452480+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:33.452667+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:34.452852+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:35.453017+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:36.453215+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5726 writes, 24K keys, 5726 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5726 writes, 938 syncs, 6.10 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559b4583add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:37.453324+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 327680 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:38.453443+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 327680 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:39.453632+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 327680 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:40.453840+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 327680 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:41.454117+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:42.454280+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:43.454433+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:44.454569+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:45.454739+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:46.454977+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:47.455151+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:48.455330+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:49.455511+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:50.455670+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:51.455838+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:52.456017+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:53.456202+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:54.456364+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:55.456524+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:56.456705+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:57.457076+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:58.457210+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:54:59.457335+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:00.457452+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:01.457619+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:02.457785+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:03.457971+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:04.458116+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:05.458306+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:06.458427+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:07.458588+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:08.458727+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:09.458871+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:10.459033+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:11.459188+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:12.459297+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:13.459439+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:14.459605+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:15.459777+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:16.459955+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:17.460108+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:18.460227+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:19.460409+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:20.460546+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:21.460771+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:22.460956+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:23.461094+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:24.461254+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:25.461417+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:26.461589+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:27.461782+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:28.461973+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:29.462149+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:30.462305+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:31.462473+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:32.462586+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:33.462745+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:34.462939+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:35.463085+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:36.463237+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:37.463412+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:38.463576+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 600.159301758s of 601.157226562s, submitted: 106
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:39.463724+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:40.464129+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:41.464329+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:42.464517+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:43.464638+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:44.464790+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:45.464963+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:46.465138+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:47.465283+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:48.465486+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:49.465751+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:50.465924+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:51.466099+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:52.466265+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:53.466441+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:54.466579+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:55.466747+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:56.466851+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:57.467014+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:58.467167+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:55:59.467319+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:00.467442+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:01.467655+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:02.467853+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:03.467999+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:04.468153+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:05.468272+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:06.468407+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:07.468612+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:08.468722+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:09.468885+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:10.469054+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:11.469250+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:12.469374+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:13.469575+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:14.469761+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:15.469968+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:16.470105+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:17.470254+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:18.470386+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:19.470535+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:20.470687+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:21.470995+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:22.471157+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:23.471296+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:24.471456+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:25.471598+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:26.471761+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:27.471979+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:28.472110+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:29.472268+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:30.472400+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:31.472561+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:32.472743+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:33.472958+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:34.473150+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:35.473277+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:36.473412+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:37.473546+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 245760 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:38.473714+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:39.473846+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:40.474009+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:41.474196+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:42.474327+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:43.474464+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:44.474648+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:45.474789+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:46.475002+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:47.475171+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:48.475326+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:49.475519+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:50.476705+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:51.477069+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:52.477225+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:53.477419+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:54.477582+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:55.477728+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:56.478037+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:57.478240+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:58.478413+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:56:59.478567+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:00.478742+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:01.478927+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:02.479111+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:03.479522+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:04.479727+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:05.479963+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:06.480209+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:07.480427+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:08.480648+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:09.481030+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:10.481166+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:11.481397+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:12.481588+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:13.481788+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:14.481985+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:15.482179+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:16.482378+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:17.482567+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:18.482742+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:19.482967+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:20.483144+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:21.483374+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:22.483543+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:23.483774+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:24.483925+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:25.484172+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:26.484399+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:27.484668+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:28.484863+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:29.485174+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:30.485400+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:31.485623+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:32.485788+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:33.485936+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:34.486111+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:35.486300+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:36.486458+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:37.486661+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 229376 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:38.486810+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:39.486977+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:40.487121+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:41.487310+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:42.487496+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:43.487633+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:44.487795+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:45.487944+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:46.488075+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:47.488246+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:48.488439+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:49.488606+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:50.488761+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:51.488950+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:52.489125+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:53.489343+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:54.489517+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:55.489703+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:56.489941+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:57.490103+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:58.490277+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:57:59.490441+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:00.490603+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:01.490766+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:02.491000+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:03.491214+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:04.491357+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:05.491541+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:06.491679+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:07.491842+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:08.492037+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:09.492180+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:10.492400+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:11.492585+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:12.492698+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:13.492813+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:14.492998+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:15.493163+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 204800 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:16.493337+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:17.493501+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:18.493666+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:19.493801+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:20.493949+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:21.494112+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:22.494248+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:23.494378+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:24.494504+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:25.494651+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:26.494773+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:27.494967+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:28.495127+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:29.495267+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:30.495430+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:31.495597+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:32.495789+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:33.495961+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:34.496179+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:35.496340+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:36.496497+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:37.496660+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 180224 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:38.496789+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:39.497056+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:40.497247+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:41.497430+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:42.497599+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:43.497766+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:44.497974+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:45.498135+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:46.498354+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:47.498526+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:48.498681+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:49.498951+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:50.499220+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:51.499467+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:52.499633+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:53.499825+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:54.499979+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:55.500146+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:56.500304+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:57.500551+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:58.500698+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:58:59.500947+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:00.501092+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:01.501299+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:02.501410+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:03.501561+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:04.501716+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:05.501997+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:06.502140+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:07.502285+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:08.502438+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:09.502613+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 139264 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:10.502792+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:11.503016+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:12.503157+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:13.503331+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:14.503522+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:15.503695+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:16.503865+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:17.504052+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:18.504216+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xb98ab/0x171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 114688 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:19.504392+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 114688 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:20.504508+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 114688 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:21.504681+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862388 data_alloc: 218103808 data_used: 200704
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 114688 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:22.504808+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 222.721786499s of 223.958694458s, submitted: 106
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: handle_auth_request added challenge on 0x559b49c30400
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 24576 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:23.504978+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 125 ms_handle_reset con 0x559b49c30400 session 0x559b47e55c20
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fcaa7000/0x0/0x4ffc00000, data 0xbcff9/0x177000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 925696 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:24.505147+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 925696 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:25.505310+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fcaa3000/0x0/0x4ffc00000, data 0xbeb92/0x17a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: handle_auth_request added challenge on 0x559b49c30800
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:26.505485+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 10158080 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910146 data_alloc: 218103808 data_used: 217088
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 126 ms_handle_reset con 0x559b49c30800 session 0x559b48c72780
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:27.505661+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:28.505800+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:29.505983+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:30.506165+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc62f000/0x0/0x4ffc00000, data 0x53074e/0x5ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62b000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:31.506331+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913032 data_alloc: 218103808 data_used: 221184
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:32.506455+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:33.506651+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:34.506837+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 10100736 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62c000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:35.506973+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 10100736 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:36.507126+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 10100736 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913032 data_alloc: 218103808 data_used: 221184
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:37.507304+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 10100736 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:38.507441+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 10100736 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:39.507581+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 10100736 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:40.507698+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 10100736 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62c000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:41.507853+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 10076160 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913192 data_alloc: 218103808 data_used: 225280
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:42.508012+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 10076160 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:43.508157+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 10076160 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:44.508300+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 10076160 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:45.508439+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 10076160 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Got map version 10
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:46.508586+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 10067968 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62c000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913192 data_alloc: 218103808 data_used: 225280
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:47.508722+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 10067968 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:48.508858+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 10067968 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:49.509039+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 10067968 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:50.509218+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 10067968 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:51.509400+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 10067968 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62c000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913192 data_alloc: 218103808 data_used: 225280
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:52.509529+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 10067968 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:53.509658+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 10067968 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62c000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Got map version 11
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:54.509802+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.660118103s of 32.033905029s, submitted: 47
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:55.509965+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:56.510134+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912520 data_alloc: 218103808 data_used: 225280
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:57.510282+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62d000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:58.510431+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T16:59:59.510582+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:00.510815+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62d000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:01.511096+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62d000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912520 data_alloc: 218103808 data_used: 225280
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:02.511279+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:03.511433+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:04.511572+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62d000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:05.511700+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.007043839s of 11.026364326s, submitted: 4
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:06.511829+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62d000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912504 data_alloc: 218103808 data_used: 225280
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:07.511979+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:08.512119+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62d000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:09.512281+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc62d000/0x0/0x4ffc00000, data 0x5321b1/0x5f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:10.512419+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 10010624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:11.512617+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fc629000/0x0/0x4ffc00000, data 0x533d97/0x5f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916662 data_alloc: 218103808 data_used: 233472
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:12.512777+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:13.512944+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fc629000/0x0/0x4ffc00000, data 0x533d97/0x5f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:14.513091+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fc62a000/0x0/0x4ffc00000, data 0x533d97/0x5f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:15.513260+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:16.513492+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:17.513660+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915798 data_alloc: 218103808 data_used: 233472
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:18.513875+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.649345398s of 13.016798973s, submitted: 26
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:19.514085+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fc62a000/0x0/0x4ffc00000, data 0x533d97/0x5f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:20.514239+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:21.514482+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:22.514622+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915798 data_alloc: 218103808 data_used: 233472
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:23.514742+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:24.514860+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fc626000/0x0/0x4ffc00000, data 0x5357fa/0x5f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:25.516054+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:26.516171+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:27.516322+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919940 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:28.516491+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:29.516635+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:30.516777+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fc626000/0x0/0x4ffc00000, data 0x5357fa/0x5f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:31.517074+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:32.517261+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920116 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:33.517404+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fc626000/0x0/0x4ffc00000, data 0x5357fa/0x5f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:34.517535+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:35.517711+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:36.517968+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 10018816 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fc626000/0x0/0x4ffc00000, data 0x5357fa/0x5f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.035810471s of 18.060047150s, submitted: 15
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:37.518147+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922914 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 10223616 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:38.518378+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 10223616 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:39.518515+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 10223616 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:40.518697+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 10223616 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:41.518945+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 10207232 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:42.519127+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922914 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 10207232 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fc623000/0x0/0x4ffc00000, data 0x5373e0/0x5fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:43.519296+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 10207232 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:44.519498+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 10207232 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:45.519704+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 10199040 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:46.519850+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 10199040 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fc620000/0x0/0x4ffc00000, data 0x538e43/0x5fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:47.519965+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925888 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 10199040 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:48.520116+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 10190848 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.896615982s of 11.985222816s, submitted: 30
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fc620000/0x0/0x4ffc00000, data 0x538e43/0x5fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:49.520251+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:50.520452+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:51.520637+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:52.520802+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925216 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:53.520980+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fc621000/0x0/0x4ffc00000, data 0x538e43/0x5fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:54.521146+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:55.521326+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: handle_auth_request added challenge on 0x559b49c30c00
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:56.521518+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fc620000/0x0/0x4ffc00000, data 0x538ede/0x5fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:57.521648+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926984 data_alloc: 218103808 data_used: 245760
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:58.521796+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:00:59.521956+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.157895088s of 10.438511848s, submitted: 5
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:00.522136+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:01.522366+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc620000/0x0/0x4ffc00000, data 0x538ede/0x5fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:02.522551+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930950 data_alloc: 218103808 data_used: 253952
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc61c000/0x0/0x4ffc00000, data 0x53aac4/0x601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:03.522711+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:04.522929+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 10182656 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:05.523118+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 10174464 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:06.523253+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 10174464 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc61d000/0x0/0x4ffc00000, data 0x53ab3a/0x601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: handle_auth_request added challenge on 0x559b49c31800
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:07.523395+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935146 data_alloc: 218103808 data_used: 262144
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 10174464 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:08.523552+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Got map version 12
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:09.523705+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:10.523920+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.907539845s of 11.074452400s, submitted: 45
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:11.524107+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc614000/0x0/0x4ffc00000, data 0x53e3c3/0x608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:12.524274+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940452 data_alloc: 218103808 data_used: 270336
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 10108928 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc617000/0x0/0x4ffc00000, data 0x53e328/0x607000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [0,0,0,0,1])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:13.524435+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 10100736 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:14.524568+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 10067968 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:15.524689+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 9003008 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:16.524803+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 9003008 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:17.525012+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942864 data_alloc: 218103808 data_used: 278528
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 9003008 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:18.525250+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc614000/0x0/0x4ffc00000, data 0x53fefb/0x609000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 9003008 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:19.525383+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 9003008 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:20.525521+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 8953856 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc611000/0x0/0x4ffc00000, data 0x54197e/0x60c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc611000/0x0/0x4ffc00000, data 0x54197e/0x60c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.943125725s of 10.805793762s, submitted: 102
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:21.525671+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:22.525802+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948620 data_alloc: 218103808 data_used: 278528
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:23.525945+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc60e000/0x0/0x4ffc00000, data 0x543594/0x60f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:24.526409+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc60e000/0x0/0x4ffc00000, data 0x543594/0x60f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:25.526786+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:26.527028+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc60a000/0x0/0x4ffc00000, data 0x5450b2/0x613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc60a000/0x0/0x4ffc00000, data 0x5450b2/0x613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:27.527178+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953842 data_alloc: 218103808 data_used: 290816
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:28.527297+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc60a000/0x0/0x4ffc00000, data 0x5450b2/0x613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:29.527434+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:30.527574+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:31.527767+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:32.527949+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953842 data_alloc: 218103808 data_used: 290816
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 8929280 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:33.528101+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 8921088 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc60a000/0x0/0x4ffc00000, data 0x5450b2/0x613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:34.528231+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 8921088 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.841822624s of 13.927069664s, submitted: 33
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:35.528373+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 8921088 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:36.528456+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 8921088 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:37.528532+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc60a000/0x0/0x4ffc00000, data 0x5450b2/0x613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954018 data_alloc: 218103808 data_used: 290816
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 8921088 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:38.528626+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 8921088 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:39.528729+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 8912896 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:40.528855+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 8912896 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:41.529064+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 8904704 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:42.529185+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960900 data_alloc: 218103808 data_used: 299008
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fc606000/0x0/0x4ffc00000, data 0x546d63/0x617000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 8896512 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:43.529371+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 8896512 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:44.529527+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 8896512 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:45.529660+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.690116882s of 10.124419212s, submitted: 31
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 8896512 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc607000/0x0/0x4ffc00000, data 0x546d63/0x617000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:46.529829+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 8896512 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:47.530031+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962462 data_alloc: 218103808 data_used: 307200
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 8896512 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:48.530198+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 8896512 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:49.530347+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc605000/0x0/0x4ffc00000, data 0x5486b0/0x618000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 8896512 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:50.530529+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc605000/0x0/0x4ffc00000, data 0x5486b0/0x618000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 7831552 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:51.530990+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 7831552 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:52.531274+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964230 data_alloc: 218103808 data_used: 307200
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 7831552 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc604000/0x0/0x4ffc00000, data 0x54874b/0x619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:53.531458+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 7823360 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc606000/0x0/0x4ffc00000, data 0x5486b0/0x618000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:54.531622+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 7815168 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc606000/0x0/0x4ffc00000, data 0x5486b0/0x618000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:55.531745+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 7815168 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.231688499s of 10.758177757s, submitted: 20
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:56.531925+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 7806976 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:57.532073+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc602000/0x0/0x4ffc00000, data 0x54a296/0x61b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965772 data_alloc: 218103808 data_used: 315392
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 7806976 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:58.532949+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 7806976 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:01:59.533180+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 7806976 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:00.533505+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 7806976 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:01.533847+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc601000/0x0/0x4ffc00000, data 0x54a331/0x61c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 7790592 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:02.534156+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970130 data_alloc: 218103808 data_used: 315392
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 7790592 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:03.534428+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 7790592 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:04.534776+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 7790592 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:05.535063+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 7790592 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.882214546s of 10.021253586s, submitted: 37
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:06.535318+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 7757824 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:07.535722+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968736 data_alloc: 218103808 data_used: 315392
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 7757824 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:08.535934+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 7757824 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:09.536152+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 7757824 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:10.536367+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 7757824 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:11.536600+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 7757824 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:12.536822+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968736 data_alloc: 218103808 data_used: 315392
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 7757824 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:13.536963+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Got map version 13
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 7757824 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:14.537158+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 7757824 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:15.537422+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 7757824 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.978836060s of 10.002865791s, submitted: 4
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:16.537576+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:17.537831+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968736 data_alloc: 218103808 data_used: 315392
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:18.538012+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:19.538223+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:20.538432+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:21.538751+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:22.538967+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968736 data_alloc: 218103808 data_used: 315392
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:23.539126+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:24.539293+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:25.539450+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:26.539643+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:27.539764+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969056 data_alloc: 218103808 data_used: 323584
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:28.540035+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:29.540238+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:30.540416+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.988643646s of 14.997574806s, submitted: 2
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:31.540605+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:32.540765+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969072 data_alloc: 218103808 data_used: 323584
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:33.540979+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:34.541109+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:35.541269+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:36.541458+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:37.541602+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969088 data_alloc: 218103808 data_used: 323584
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:38.541705+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:39.541804+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:40.542021+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x54bdc2/0x61f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 7741440 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.036031723s of 10.064247131s, submitted: 7
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 ms_handle_reset con 0x559b49c31800 session 0x559b470cef00
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:41.542224+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:42.542399+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Got map version 14
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970648 data_alloc: 218103808 data_used: 323584
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:43.542534+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:44.542778+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:45.542997+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:46.543112+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:47.543304+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969958 data_alloc: 218103808 data_used: 323584
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:48.543522+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:49.543776+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:50.543956+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:51.544199+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.928445816s of 10.967930794s, submitted: 139
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:52.544337+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969974 data_alloc: 218103808 data_used: 323584
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:53.544500+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:54.544692+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:55.544978+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:56.545148+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:57.545328+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969974 data_alloc: 218103808 data_used: 323584
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:58.545510+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:02:59.545700+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:00.545834+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:01.545981+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:02.546156+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969990 data_alloc: 218103808 data_used: 323584
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:03.546311+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.004209518s of 12.012298584s, submitted: 2
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:04.546476+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:05.546632+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:06.546794+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:07.584459+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969974 data_alloc: 218103808 data_used: 323584
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:08.584608+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:09.584806+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:10.584988+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:11.585169+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:12.585345+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969974 data_alloc: 218103808 data_used: 323584
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:13.585452+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.988098145s of 10.001517296s, submitted: 3
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:14.585575+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:15.585715+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x54bdc1/0x61f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:16.585865+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 7217152 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:17.586030+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971726 data_alloc: 218103808 data_used: 323584
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 7184384 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:18.586170+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 7315456 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:19.586297+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 7315456 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:20.586518+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 7315456 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:21.586664+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5ff000/0x0/0x4ffc00000, data 0x54bdbf/0x61f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 7315456 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:22.586831+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x54bcf9/0x61e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969958 data_alloc: 218103808 data_used: 323584
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 7315456 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:23.586961+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 7290880 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:24.587145+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 7290880 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:25.587298+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 7290880 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:26.587445+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 7290880 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:27.587616+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971742 data_alloc: 218103808 data_used: 323584
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 7290880 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.930398941s of 14.013147354s, submitted: 9
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:28.587845+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5ff000/0x0/0x4ffc00000, data 0x54bd94/0x61f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 7290880 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:29.588157+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 7290880 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:30.588265+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 7290880 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:31.588466+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 7274496 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:32.588617+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5fd000/0x0/0x4ffc00000, data 0x54beca/0x621000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974910 data_alloc: 218103808 data_used: 323584
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 7266304 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:33.588826+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 7266304 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:34.588983+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 7266304 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:35.589193+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc5f9000/0x0/0x4ffc00000, data 0x54dab0/0x624000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 7266304 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:36.589350+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 7258112 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:37.589637+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978826 data_alloc: 218103808 data_used: 331776
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 7258112 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:38.589788+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 7258112 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:39.589951+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x54da15/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 7258112 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.953672409s of 12.076897621s, submitted: 31
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:40.590088+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 7258112 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:41.590282+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 7258112 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:42.590497+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984944 data_alloc: 218103808 data_used: 344064
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 7225344 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:43.590661+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc5f6000/0x0/0x4ffc00000, data 0x55103a/0x628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 7184384 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:44.590848+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 7184384 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:45.591050+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc5f6000/0x0/0x4ffc00000, data 0x55103a/0x628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 7176192 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:46.591221+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 7176192 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:47.591398+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989158 data_alloc: 218103808 data_used: 352256
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 7176192 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:48.591570+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 7176192 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:49.591722+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f2000/0x0/0x4ffc00000, data 0x552abf/0x62b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 7176192 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.864019394s of 10.115748405s, submitted: 47
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:50.591935+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 7168000 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:51.592093+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 7168000 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:52.592275+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988454 data_alloc: 218103808 data_used: 352256
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 7168000 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:53.592446+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f3000/0x0/0x4ffc00000, data 0x552abd/0x62b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 7168000 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:54.592608+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:55.592751+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:56.592868+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:57.592998+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989500 data_alloc: 218103808 data_used: 352256
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f2000/0x0/0x4ffc00000, data 0x552abe/0x62b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:58.593116+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:03:59.593234+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f2000/0x0/0x4ffc00000, data 0x552abe/0x62b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:00.593360+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.076109886s of 10.867831230s, submitted: 11
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:01.593512+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:02.594025+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987588 data_alloc: 218103808 data_used: 352256
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:03.594316+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f4000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:04.594514+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f4000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:05.594727+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f4000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:06.594991+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f4000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f4000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:07.595195+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987588 data_alloc: 218103808 data_used: 352256
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:08.595327+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f4000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:09.595460+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:10.595583+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.988055229s of 10.014366150s, submitted: 5
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:11.595718+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:12.595878+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987604 data_alloc: 218103808 data_used: 352256
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:13.596076+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f4000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:14.596284+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f4000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:15.596419+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:16.596627+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f4000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:17.596770+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987604 data_alloc: 218103808 data_used: 352256
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:18.596957+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f4000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:19.597112+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:20.597350+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.957857132s of 10.065895081s, submitted: 4
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:21.597639+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:22.597788+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f2000/0x0/0x4ffc00000, data 0x552abf/0x62b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989484 data_alloc: 218103808 data_used: 352256
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:23.597966+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 7143424 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:24.598096+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 7127040 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:25.598286+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 7127040 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:26.598490+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 7127040 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f3000/0x0/0x4ffc00000, data 0x5529f7/0x62a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:27.598662+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 7127040 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989308 data_alloc: 218103808 data_used: 352256
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:28.598849+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 7102464 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f1000/0x0/0x4ffc00000, data 0x552b5b/0x62c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:29.599022+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 7102464 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:30.599222+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 7094272 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:31.599440+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 7094272 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.224013329s of 10.345972061s, submitted: 12
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:32.599590+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 7086080 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992364 data_alloc: 218103808 data_used: 352256
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:33.599729+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 7086080 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:34.599880+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 7086080 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f1000/0x0/0x4ffc00000, data 0x552c22/0x62d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:35.600082+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 7086080 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f1000/0x0/0x4ffc00000, data 0x552c22/0x62d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:36.600231+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 7061504 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f1000/0x0/0x4ffc00000, data 0x552c22/0x62d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 6923 writes, 27K keys, 6923 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6923 writes, 1355 syncs, 5.11 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1197 writes, 3013 keys, 1197 commit groups, 1.0 writes per commit group, ingest: 1.69 MB, 0.00 MB/s
                                           Interval WAL: 1197 writes, 417 syncs, 2.87 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:37.600389+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 7061504 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992364 data_alloc: 218103808 data_used: 352256
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:38.600557+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 7061504 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:39.600694+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 7061504 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:40.601006+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 7061504 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:41.601242+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f1000/0x0/0x4ffc00000, data 0x552bf6/0x62d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 7061504 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:42.601362+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 7061504 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992364 data_alloc: 218103808 data_used: 352256
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:43.601592+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 7061504 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc ms_handle_reset ms_handle_reset con 0x559b49514400
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3235544197
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: get_auth_request con 0x559b49c2f800 auth_method 0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_configure stats_period=5
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:44.601732+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 6963200 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:45.601963+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 6963200 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:46.602151+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 6963200 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f1000/0x0/0x4ffc00000, data 0x552bf6/0x62d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:47.602324+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 6963200 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 ms_handle_reset con 0x559b49c30000 session 0x559b490d3c20
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: handle_auth_request added challenge on 0x559b47de8c00
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992364 data_alloc: 218103808 data_used: 352256
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:48.602454+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 6963200 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.831445694s of 16.878786087s, submitted: 5
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:49.602610+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 6938624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f1000/0x0/0x4ffc00000, data 0x552bf6/0x62d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:50.627732+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 6938624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:51.627959+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 6938624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:52.628165+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 6922240 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f0000/0x0/0x4ffc00000, data 0x552cbd/0x62e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993956 data_alloc: 218103808 data_used: 352256
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:53.628287+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 6922240 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:54.628502+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 6922240 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:55.628691+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 6922240 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:56.628844+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 6922240 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:57.629019+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 6914048 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991674 data_alloc: 218103808 data_used: 352256
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:58.629199+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 6905856 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f2000/0x0/0x4ffc00000, data 0x552b59/0x62c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:04:59.629346+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 6905856 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:00.629548+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 6905856 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f2000/0x0/0x4ffc00000, data 0x552b59/0x62c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:01.629774+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 6905856 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.806101799s of 13.457287788s, submitted: 17
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:02.629952+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 6905856 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5f2000/0x0/0x4ffc00000, data 0x552b59/0x62c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990984 data_alloc: 218103808 data_used: 352256
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:03.630100+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 6897664 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:04.630258+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 6889472 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:05.630434+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 3022848 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc5e4000/0x0/0x4ffc00000, data 0x56076e/0x63a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:06.630610+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 3022848 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:07.631024+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 2744320 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005552 data_alloc: 218103808 data_used: 352256
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:08.631182+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 1564672 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:09.631341+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 1531904 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:10.631506+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 1220608 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:11.631757+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 794624 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb3c4000/0x0/0x4ffc00000, data 0x5e1481/0x6ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [1])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.616258621s of 10.176497459s, submitted: 71
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:12.631993+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 1818624 heap: 83705856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:13.632152+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006662 data_alloc: 218103808 data_used: 352256
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 2449408 heap: 83705856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:14.632290+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 2449408 heap: 83705856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:15.632478+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 434176 heap: 83705856 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:16.632600+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 999424 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:17.632727+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 999424 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb30e000/0x0/0x4ffc00000, data 0x697588/0x770000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:18.632951+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017350 data_alloc: 218103808 data_used: 352256
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 1998848 heap: 85803008 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:19.633106+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 1490944 heap: 85803008 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:20.633244+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb285000/0x0/0x4ffc00000, data 0x71fc5d/0x7f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 303104 heap: 85803008 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:21.633427+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 196608 heap: 85803008 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:22.633588+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.204580307s of 10.285860062s, submitted: 86
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 2121728 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:23.633712+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033122 data_alloc: 218103808 data_used: 352256
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 2105344 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:24.634052+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 2097152 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:25.634255+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 2121728 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:26.634389+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb1c9000/0x0/0x4ffc00000, data 0x7d9a78/0x8b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 2179072 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:27.634563+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2105344 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:28.634698+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1036590 data_alloc: 218103808 data_used: 360448
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 86810624 unmapped: 2138112 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:29.634826+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 2129920 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:30.634975+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 87203840 unmapped: 1744896 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:31.635201+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 87842816 unmapped: 2154496 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb11c000/0x0/0x4ffc00000, data 0x886afa/0x962000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [0,0,0,1])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:32.635320+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.199892998s of 10.026391983s, submitted: 138
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 2138112 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:33.635537+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1056982 data_alloc: 218103808 data_used: 360448
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 1957888 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:34.635846+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88121344 unmapped: 1875968 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:35.635984+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88121344 unmapped: 1875968 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:36.636135+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 1638400 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 01 17:19:24 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/659991805' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:37.637058+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88498176 unmapped: 1499136 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb076000/0x0/0x4ffc00000, data 0x92c598/0xa07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:38.637245+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052752 data_alloc: 218103808 data_used: 368640
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 87367680 unmapped: 2629632 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb053000/0x0/0x4ffc00000, data 0x950fea/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:39.637491+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 87171072 unmapped: 2826240 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:40.637608+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fac3e000/0x0/0x4ffc00000, data 0x9561fb/0xa30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88301568 unmapped: 1695744 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:41.637754+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 1638400 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:42.637884+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 1638400 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fac3a000/0x0/0x4ffc00000, data 0x957de1/0xa33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:43.638067+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054754 data_alloc: 218103808 data_used: 376832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 1638400 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:44.638237+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fac3a000/0x0/0x4ffc00000, data 0x957de1/0xa33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 1630208 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.679636955s of 12.825030327s, submitted: 178
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:45.638485+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 1630208 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:46.638660+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88375296 unmapped: 1622016 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac35000/0x0/0x4ffc00000, data 0x95990b/0xa37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:47.638806+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88375296 unmapped: 1622016 heap: 89997312 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:48.638980+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059254 data_alloc: 218103808 data_used: 385024
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89432064 unmapped: 1613824 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:49.639158+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 2662400 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:50.639337+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 2662400 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:51.639580+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 2662400 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:52.639771+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 2662400 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac35000/0x0/0x4ffc00000, data 0x959844/0xa36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:53.639948+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059286 data_alloc: 218103808 data_used: 385024
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 2654208 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:54.640133+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 2654208 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:55.640303+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x959844/0xa36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 2646016 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:56.640476+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 2646016 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:57.640639+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 2646016 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.679360390s of 12.896973610s, submitted: 41
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:58.640806+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064318 data_alloc: 218103808 data_used: 401408
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:05:59.640979+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:00.641138+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x95b4c5/0xa3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 151 handle_osd_map epochs [152,152], i have 152, src has [1,152]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:01.641358+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:02.641552+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:03.641781+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066548 data_alloc: 218103808 data_used: 409600
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fac30000/0x0/0x4ffc00000, data 0x95cf28/0xa3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 152 handle_osd_map epochs [153,153], i have 153, src has [1,153]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:04.641927+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:05.642107+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:06.642289+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x95ea73/0xa3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:07.642446+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.969439507s of 10.173136711s, submitted: 37
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:08.642656+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070792 data_alloc: 218103808 data_used: 409600
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:09.642821+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fac2d000/0x0/0x4ffc00000, data 0x95eb0e/0xa40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:10.642981+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:11.643193+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:12.643409+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x9621b3/0xa46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:13.643530+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077044 data_alloc: 218103808 data_used: 409600
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:14.643678+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 2695168 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:15.643875+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fac25000/0x0/0x4ffc00000, data 0x963b9b/0xa48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:16.644050+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:17.644192+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:18.644329+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079168 data_alloc: 218103808 data_used: 417792
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fac25000/0x0/0x4ffc00000, data 0x963b9b/0xa48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:19.644494+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 2703360 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fac25000/0x0/0x4ffc00000, data 0x963b9b/0xa48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.558800697s of 11.713101387s, submitted: 45
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:20.644618+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 2686976 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:21.644786+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 2686976 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:22.644984+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 2686976 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:23.645131+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083894 data_alloc: 218103808 data_used: 417792
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:24.645298+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fac21000/0x0/0x4ffc00000, data 0x96581c/0xa4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:25.645433+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:26.645594+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:27.645742+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:28.645876+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088604 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:29.646014+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fac1d000/0x0/0x4ffc00000, data 0x96731a/0xa50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:30.646168+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.325958252s of 10.345420837s, submitted: 35
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:31.646330+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fac1e000/0x0/0x4ffc00000, data 0x96731a/0xa50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:32.646488+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:33.646686+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088802 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88367104 unmapped: 2678784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:34.646864+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88375296 unmapped: 2670592 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fac1e000/0x0/0x4ffc00000, data 0x96731a/0xa50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:35.647012+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fac1e000/0x0/0x4ffc00000, data 0x96731a/0xa50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88375296 unmapped: 2670592 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fac1e000/0x0/0x4ffc00000, data 0x96731a/0xa50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:36.647204+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 2662400 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:37.647458+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 2662400 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:38.647624+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092042 data_alloc: 218103808 data_used: 425984
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 2662400 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:39.647789+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 2662400 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:40.647967+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.259268761s of 10.255467415s, submitted: 8
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fac1d000/0x0/0x4ffc00000, data 0x9673b5/0xa51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 2662400 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:41.648116+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 2646016 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:42.648274+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: handle_auth_request added challenge on 0x559b47de9000
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 2637824 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:43.648392+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096476 data_alloc: 218103808 data_used: 434176
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fac19000/0x0/0x4ffc00000, data 0x969045/0xa54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 2637824 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:44.648492+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Got map version 15
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 2572288 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:45.648630+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fac1b000/0x0/0x4ffc00000, data 0x968faa/0xa53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 2572288 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:46.648773+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fac1a000/0x0/0x4ffc00000, data 0x969041/0xa54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 159 handle_osd_map epochs [160,160], i have 160, src has [1,160]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89530368 unmapped: 1515520 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac17000/0x0/0x4ffc00000, data 0x96a9af/0xa56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:47.648972+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89530368 unmapped: 1515520 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:48.649121+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101190 data_alloc: 218103808 data_used: 442368
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 1507328 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:49.649265+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 1507328 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:50.649396+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 1507328 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:51.649561+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 1507328 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:52.649714+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 1507328 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac17000/0x0/0x4ffc00000, data 0x96a9af/0xa56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:53.649841+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101190 data_alloc: 218103808 data_used: 442368
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 1507328 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:54.650010+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 1507328 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:55.650163+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac17000/0x0/0x4ffc00000, data 0x96a9af/0xa56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 1507328 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:56.650303+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.995437622s of 16.123609543s, submitted: 43
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:57.650428+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:58.650561+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103118 data_alloc: 218103808 data_used: 446464
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:06:59.650738+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:00.651276+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac16000/0x0/0x4ffc00000, data 0x96aa4a/0xa57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:01.651498+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:02.651648+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:03.651865+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103118 data_alloc: 218103808 data_used: 446464
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:04.652045+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:05.652240+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac16000/0x0/0x4ffc00000, data 0x96aa4a/0xa57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:06.652367+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:07.652540+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac17000/0x0/0x4ffc00000, data 0x96aa4a/0xa57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:08.652704+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102062 data_alloc: 218103808 data_used: 446464
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1499136 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.603413582s of 12.616387367s, submitted: 3
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:09.652875+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89563136 unmapped: 1482752 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:10.653150+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89563136 unmapped: 1482752 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:11.653328+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89563136 unmapped: 1482752 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:12.653507+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac18000/0x0/0x4ffc00000, data 0x96a9af/0xa56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89563136 unmapped: 1482752 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:13.653679+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101356 data_alloc: 218103808 data_used: 446464
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89563136 unmapped: 1482752 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:14.653842+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 1474560 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:15.654015+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 1474560 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:16.654200+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac18000/0x0/0x4ffc00000, data 0x96a9af/0xa56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 1474560 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:17.654395+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac18000/0x0/0x4ffc00000, data 0x96a9af/0xa56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 1474560 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:18.654566+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101372 data_alloc: 218103808 data_used: 446464
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 1458176 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:19.654724+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 1458176 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:20.654931+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.980280876s of 11.014110565s, submitted: 7
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac17000/0x0/0x4ffc00000, data 0x96aa4a/0xa57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 1458176 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:21.655146+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac17000/0x0/0x4ffc00000, data 0x96aa4a/0xa57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 1458176 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:22.655322+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 1458176 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:23.655480+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103156 data_alloc: 218103808 data_used: 446464
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89595904 unmapped: 1449984 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac17000/0x0/0x4ffc00000, data 0x96aa4a/0xa57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:24.655649+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:25.655818+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:26.655954+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:27.656088+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:28.656247+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101356 data_alloc: 218103808 data_used: 446464
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:29.656413+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac18000/0x0/0x4ffc00000, data 0x96a9af/0xa56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:30.656580+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac18000/0x0/0x4ffc00000, data 0x96a9af/0xa56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:31.656782+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:32.657048+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:33.657199+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101516 data_alloc: 218103808 data_used: 450560
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:34.657407+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.004671097s of 14.022926331s, submitted: 4
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 1441792 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:35.657549+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90652672 unmapped: 1441792 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:36.657737+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac19000/0x0/0x4ffc00000, data 0x96a918/0xa55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90652672 unmapped: 1441792 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac19000/0x0/0x4ffc00000, data 0x96a918/0xa55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:37.657869+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90652672 unmapped: 1441792 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:38.658049+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100682 data_alloc: 218103808 data_used: 446464
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90652672 unmapped: 1441792 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:39.658237+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90652672 unmapped: 1441792 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:40.658387+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90652672 unmapped: 1441792 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:41.658645+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90660864 unmapped: 1433600 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:42.658831+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac19000/0x0/0x4ffc00000, data 0x96a918/0xa55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89612288 unmapped: 2482176 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:43.658969+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100698 data_alloc: 218103808 data_used: 446464
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89612288 unmapped: 2482176 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:44.659195+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 160 handle_osd_map epochs [160,161], i have 160, src has [1,161]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.442216873s of 10.615092278s, submitted: 4
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 2457600 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:45.659391+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 2449408 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:46.659531+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 2449408 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:47.659677+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fac16000/0x0/0x4ffc00000, data 0x96c493/0xa57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 2449408 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:48.659802+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103684 data_alloc: 218103808 data_used: 454656
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 2449408 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:49.660064+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 2449408 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:50.660177+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fac16000/0x0/0x4ffc00000, data 0x96c493/0xa57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89718784 unmapped: 2375680 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:51.660367+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89718784 unmapped: 2375680 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:52.660511+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89718784 unmapped: 2375680 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:53.660776+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106658 data_alloc: 218103808 data_used: 454656
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89726976 unmapped: 2367488 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:54.660937+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89726976 unmapped: 2367488 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:55.661189+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89726976 unmapped: 2367488 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:56.661337+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fac13000/0x0/0x4ffc00000, data 0x96df16/0xa5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.489933014s of 11.828451157s, submitted: 35
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89726976 unmapped: 2367488 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:57.661512+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89726976 unmapped: 2367488 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:58.661696+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107706 data_alloc: 218103808 data_used: 458752
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89735168 unmapped: 2359296 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:07:59.661838+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 2342912 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:00.662016+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fac13000/0x0/0x4ffc00000, data 0x96dfb1/0xa5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 162 handle_osd_map epochs [163,163], i have 163, src has [1,163]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 2334720 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:01.662192+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 2334720 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:02.662351+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 2334720 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:03.662503+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110824 data_alloc: 218103808 data_used: 466944
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 2334720 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:04.662663+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fac10000/0x0/0x4ffc00000, data 0x96fbc7/0xa5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 2318336 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:05.662821+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 2318336 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:06.662977+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 2318336 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:07.663632+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 2318336 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:08.663827+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0b000/0x0/0x4ffc00000, data 0x97175b/0xa62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118398 data_alloc: 218103808 data_used: 479232
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 2318336 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:09.664018+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 2318336 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:10.664237+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 2318336 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:11.664524+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 2318336 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0b000/0x0/0x4ffc00000, data 0x97175b/0xa62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:12.664818+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 2318336 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:13.665041+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.872739792s of 16.991382599s, submitted: 73
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117660 data_alloc: 218103808 data_used: 479232
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89784320 unmapped: 2310144 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:14.665275+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89784320 unmapped: 2310144 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:15.665460+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:16.665622+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0d000/0x0/0x4ffc00000, data 0x97164a/0xa61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:17.665800+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0d000/0x0/0x4ffc00000, data 0x97164a/0xa61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:18.665968+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115750 data_alloc: 218103808 data_used: 479232
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:19.666109+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0d000/0x0/0x4ffc00000, data 0x97164a/0xa61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0d000/0x0/0x4ffc00000, data 0x97164a/0xa61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:20.666282+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:21.666464+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:22.666670+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:23.666843+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117518 data_alloc: 218103808 data_used: 479232
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:24.667003+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.731521606s of 10.673498154s, submitted: 3
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0c000/0x0/0x4ffc00000, data 0x97175c/0xa62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:25.667221+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0c000/0x0/0x4ffc00000, data 0x97175c/0xa62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:26.667360+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:27.667539+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:28.667701+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116668 data_alloc: 218103808 data_used: 479232
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:29.667838+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0d000/0x0/0x4ffc00000, data 0x97164a/0xa61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:30.668037+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 2301952 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:31.668202+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89800704 unmapped: 2293760 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:32.668338+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0d000/0x0/0x4ffc00000, data 0x97164a/0xa61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89800704 unmapped: 2293760 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:33.668472+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116684 data_alloc: 218103808 data_used: 479232
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89800704 unmapped: 2293760 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:34.668626+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0d000/0x0/0x4ffc00000, data 0x97164a/0xa61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.121912003s of 10.435431480s, submitted: 5
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:35.668802+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89800704 unmapped: 2293760 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0d000/0x0/0x4ffc00000, data 0x97164a/0xa61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:36.668942+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 89800704 unmapped: 2293760 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:37.669072+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90873856 unmapped: 1220608 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:38.669236+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90873856 unmapped: 1220608 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119962 data_alloc: 218103808 data_used: 487424
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:39.669407+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90873856 unmapped: 1220608 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fac0a000/0x0/0x4ffc00000, data 0x973230/0xa64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:40.669597+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90873856 unmapped: 1220608 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fac0a000/0x0/0x4ffc00000, data 0x973230/0xa64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:41.669764+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90873856 unmapped: 1220608 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:42.669939+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90873856 unmapped: 1220608 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:43.670076+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 1212416 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124280 data_alloc: 218103808 data_used: 499712
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:44.670315+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 1212416 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:45.670446+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 1212416 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fac06000/0x0/0x4ffc00000, data 0x974c93/0xa67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:46.670627+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 1212416 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:47.670808+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 1212416 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:48.670967+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 1212416 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124280 data_alloc: 218103808 data_used: 499712
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fac06000/0x0/0x4ffc00000, data 0x974c93/0xa67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:49.671175+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 1212416 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fac06000/0x0/0x4ffc00000, data 0x974c93/0xa67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:50.671382+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 1212416 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:51.671543+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 1212416 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.257966042s of 16.866178513s, submitted: 33
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:52.671719+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 1212416 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:53.671865+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90898432 unmapped: 1196032 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fac06000/0x0/0x4ffc00000, data 0x974c93/0xa67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123416 data_alloc: 218103808 data_used: 499712
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:54.671993+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 90898432 unmapped: 1196032 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:55.672135+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 991232 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 166 handle_osd_map epochs [166,167], i have 166, src has [1,167]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:56.678815+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91144192 unmapped: 950272 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:57.678940+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91365376 unmapped: 1777664 heap: 93143040 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:58.679592+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91365376 unmapped: 1777664 heap: 93143040 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136408 data_alloc: 218103808 data_used: 512000
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:08:59.679851+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91430912 unmapped: 1712128 heap: 93143040 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fabad000/0x0/0x4ffc00000, data 0x9cbe07/0xac0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:00.679997+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91553792 unmapped: 2637824 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:01.680140+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 2588672 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:02.680253+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 2482176 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.750692844s of 11.498521805s, submitted: 44
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:03.680423+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91234304 unmapped: 2957312 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141242 data_alloc: 218103808 data_used: 520192
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:04.680574+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91234304 unmapped: 2957312 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 168 heartbeat osd_stat(store_statfs(0x4fab76000/0x0/0x4ffc00000, data 0xa00db3/0xaf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:05.680700+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91529216 unmapped: 2662400 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:06.680819+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91586560 unmapped: 2605056 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:07.680941+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 168 heartbeat osd_stat(store_statfs(0x4fab20000/0x0/0x4ffc00000, data 0xa57783/0xb4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 2408448 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 168 heartbeat osd_stat(store_statfs(0x4fab1c000/0x0/0x4ffc00000, data 0xa5ba22/0xb52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:08.681107+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91971584 unmapped: 2220032 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142474 data_alloc: 218103808 data_used: 520192
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:09.681263+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 91340800 unmapped: 2850816 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 168 heartbeat osd_stat(store_statfs(0x4fab1d000/0x0/0x4ffc00000, data 0xa5b987/0xb51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:10.681430+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 92479488 unmapped: 1712128 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 168 heartbeat osd_stat(store_statfs(0x4fab0b000/0x0/0x4ffc00000, data 0xa6d8a5/0xb63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:11.692409+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 92643328 unmapped: 1548288 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:12.692527+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 92643328 unmapped: 1548288 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 168 heartbeat osd_stat(store_statfs(0x4faae9000/0x0/0x4ffc00000, data 0xa8f7c7/0xb85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Got map version 16
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.284783363s of 10.012758255s, submitted: 44
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:13.692649+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 1449984 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148846 data_alloc: 218103808 data_used: 520192
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:14.692784+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93052928 unmapped: 1138688 heap: 94191616 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:15.692958+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 1974272 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:16.693100+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2007040 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _renew_subs
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:17.693220+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93429760 unmapped: 1810432 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:18.693333+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93724672 unmapped: 1515520 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 170 heartbeat osd_stat(store_statfs(0x4faa0e000/0x0/0x4ffc00000, data 0xb67e2d/0xc5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161862 data_alloc: 218103808 data_used: 528384
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:19.693482+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93724672 unmapped: 1515520 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:20.693627+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2236416 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:21.693758+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2236416 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:22.693944+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2236416 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.111546516s of 10.004213333s, submitted: 84
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:23.694291+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2236416 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162712 data_alloc: 218103808 data_used: 536576
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:24.694441+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2236416 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fa9f8000/0x0/0x4ffc00000, data 0xb7c762/0xc75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:25.694627+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93339648 unmapped: 1900544 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fa9cc000/0x0/0x4ffc00000, data 0xba8a5b/0xca1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:26.694804+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93519872 unmapped: 1720320 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:27.694998+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93519872 unmapped: 1720320 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:28.695175+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 1687552 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169384 data_alloc: 218103808 data_used: 544768
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:29.695339+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93634560 unmapped: 1605632 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:30.695506+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 1499136 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fa9a9000/0x0/0x4ffc00000, data 0xbcb911/0xcc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:31.695861+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 1499136 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:32.695984+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93822976 unmapped: 1417216 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:33.696134+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93831168 unmapped: 1409024 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169084 data_alloc: 218103808 data_used: 544768
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:34.696359+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93831168 unmapped: 1409024 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.286650658s of 11.565922737s, submitted: 16
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:35.696567+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 93569024 unmapped: 2719744 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:36.696771+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fa971000/0x0/0x4ffc00000, data 0xc0404a/0xcfd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 1376256 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:37.696920+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 1359872 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fa94a000/0x0/0x4ffc00000, data 0xc2a4c0/0xd24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:38.697058+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95207424 unmapped: 1081344 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fa94a000/0x0/0x4ffc00000, data 0xc2a4c0/0xd24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:39.697213+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176732 data_alloc: 218103808 data_used: 544768
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 1359872 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fa94c000/0x0/0x4ffc00000, data 0xc2a38a/0xd22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:40.697341+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 1359872 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:41.697578+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 171 ms_handle_reset con 0x559b47de9000 session 0x559b48c370e0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95526912 unmapped: 761856 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:42.697704+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95526912 unmapped: 761856 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Got map version 17
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:43.697854+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95543296 unmapped: 745472 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fa90a000/0x0/0x4ffc00000, data 0xc6c291/0xd64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:44.697988+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174258 data_alloc: 218103808 data_used: 544768
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fa90a000/0x0/0x4ffc00000, data 0xc6c291/0xd64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95543296 unmapped: 745472 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.350504875s of 10.000372887s, submitted: 157
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:45.698126+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fa90a000/0x0/0x4ffc00000, data 0xc6c291/0xd64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 802816 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:46.698271+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 802816 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:47.698459+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95674368 unmapped: 614400 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:48.698655+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 172 heartbeat osd_stat(store_statfs(0x4fa8d9000/0x0/0x4ffc00000, data 0xc9a6f9/0xd94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95674368 unmapped: 614400 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:49.698830+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181420 data_alloc: 218103808 data_used: 552960
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95674368 unmapped: 614400 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 172 heartbeat osd_stat(store_statfs(0x4fa8d9000/0x0/0x4ffc00000, data 0xc9a6f9/0xd94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:50.698986+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 172 heartbeat osd_stat(store_statfs(0x4fa8d9000/0x0/0x4ffc00000, data 0xc9a6f9/0xd94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 1368064 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:51.699144+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 1368064 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:52.699302+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 1368064 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:53.699419+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 172 heartbeat osd_stat(store_statfs(0x4fa8c9000/0x0/0x4ffc00000, data 0xcab1f7/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 1294336 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:54.699561+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179540 data_alloc: 218103808 data_used: 552960
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 1294336 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.382527351s of 10.000631332s, submitted: 27
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 172 heartbeat osd_stat(store_statfs(0x4fa8c9000/0x0/0x4ffc00000, data 0xcab1f7/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:55.699711+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:56.699857+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:57.699985+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c3000/0x0/0x4ffc00000, data 0xcb185a/0xdab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:58.700148+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94822400 unmapped: 1466368 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:09:59.700285+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183666 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:00.700419+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:01.700586+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:02.700768+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:03.700963+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:04.701115+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183666 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:05.701252+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:06.701457+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:07.701611+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:08.701812+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:09.701977+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183666 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:10.702113+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:11.702276+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:12.702407+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:13.702594+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:14.702743+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183666 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:15.702915+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:16.703072+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:17.703394+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:18.703618+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:19.703764+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183666 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:20.703931+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:21.704152+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:22.704287+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:23.704482+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:24.704658+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183666 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:25.704822+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:26.704976+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:27.705073+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:28.705182+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:29.705306+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183666 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:30.705465+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:31.705634+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:32.705759+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:33.705846+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:34.706001+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183666 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:35.706101+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:36.706184+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:37.706306+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:38.706474+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:39.706640+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183666 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8bf000/0x0/0x4ffc00000, data 0xcb32bd/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:40.706834+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 94773248 unmapped: 1515520 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.179630280s of 46.202053070s, submitted: 11
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 ms_handle_reset con 0x559b49c30c00 session 0x559b49106960
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:41.707071+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:42.707248+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Got map version 18
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:43.707423+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:44.707606+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182994 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb3443/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:45.707739+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:46.707872+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:47.708123+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb3443/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:48.708266+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:49.708386+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182994 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:50.708493+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:51.708634+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb3443/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:52.708748+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:53.708866+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:54.708986+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182994 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb3443/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:55.709102+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:56.709208+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:57.709362+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:58.709474+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:10:59.709629+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182994 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:00.709800+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb3443/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:01.709990+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95059968 unmapped: 1228800 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'config diff' '{prefix=config diff}'
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'config show' '{prefix=config show}'
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'counter dump' '{prefix=counter dump}'
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'counter schema' '{prefix=counter schema}'
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:02.710113+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95158272 unmapped: 2179072 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:03.710272+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 1949696 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:04.710430+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'log dump' '{prefix=log dump}'
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb3443/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182994 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95125504 unmapped: 13254656 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'perf dump' '{prefix=perf dump}'
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'perf schema' '{prefix=perf schema}'
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:05.710558+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb3443/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:06.710677+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb3443/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:07.710813+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:08.710951+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:09.711065+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182994 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:10.711177+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.981096268s of 30.019861221s, submitted: 147
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:11.711309+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb3443/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:12.711455+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Got map version 19
Oct 01 17:19:24 compute-0 ceph-osd[88140]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3235544197,v1:192.168.122.100:6801/3235544197]
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:13.711584+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:14.711703+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:15.711872+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:16.712088+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:17.712269+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:18.712380+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:19.712497+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:20.714931+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:21.715102+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:22.715219+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:23.715342+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:24.715486+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:25.715594+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:26.715715+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:27.715854+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:28.716021+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:29.716151+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:30.716302+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:31.716515+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:32.716642+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:33.716786+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:34.716943+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:35.717100+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:36.717265+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:37.717443+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:38.717595+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:39.717716+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:40.717865+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:41.718119+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:42.718354+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:43.718560+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:44.718760+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:45.718942+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:46.719083+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:47.719243+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:48.719394+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:49.719559+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:50.719744+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:51.719960+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:52.720080+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95567872 unmapped: 12812288 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:53.720213+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:54.720389+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:55.720596+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:56.720752+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:57.720914+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:58.721064+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:11:59.721212+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:00.721372+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:01.721579+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:02.721741+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:03.721879+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:04.722067+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:05.722248+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:06.722371+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:07.722520+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:08.722704+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:09.722885+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:10.723068+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:11.723489+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:12.723646+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:13.723792+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:14.723951+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:15.724171+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:16.724318+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:17.724446+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:18.724559+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:19.724691+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:20.724865+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:21.725036+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:22.725175+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:23.725304+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:24.725517+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:25.725655+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:26.725817+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:27.725955+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:28.726116+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:29.726252+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:30.726369+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:31.726522+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:32.726930+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:33.727074+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:34.727196+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:35.727364+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:36.727548+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:37.727726+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:38.727963+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:39.728093+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:40.728223+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:41.728369+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:42.728488+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:43.728664+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:44.728780+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:45.728933+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:46.729047+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:47.729190+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:48.729307+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:49.729508+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:50.729672+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:51.729880+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:52.730065+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:53.730191+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:54.730318+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:55.730465+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:56.730673+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:57.730821+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:58.730975+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:12:59.731107+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:00.731255+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:01.731438+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:02.731566+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:03.731710+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:04.731878+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:05.732085+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:06.732229+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:07.732354+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:08.732493+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:09.732648+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:10.732774+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:11.732924+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:12.733083+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:13.733202+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:14.733312+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:15.733451+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:16.733581+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:17.733704+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:18.733967+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95551488 unmapped: 12828672 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:19.734128+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:20.734261+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:21.734429+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:22.734549+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:23.734970+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:24.735136+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:25.735257+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:26.735426+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:27.735558+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:28.735672+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:29.735844+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:30.735986+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:31.736165+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:32.736361+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:33.736544+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:34.736677+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:35.736825+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:36.736978+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:37.737111+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:38.737293+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:39.737444+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:40.737561+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:41.737791+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:42.737945+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:43.738121+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:44.738321+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:45.738561+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:46.738716+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:47.739034+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:48.739227+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95559680 unmapped: 12820480 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:49.739406+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:50.739621+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:51.739822+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:52.739996+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:53.740205+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:54.740345+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:55.740604+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:56.740808+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:57.741049+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:58.741260+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:13:59.741413+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:00.741557+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:01.741741+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:02.741872+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:03.742050+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:04.742251+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:05.742528+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:06.742709+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:07.742974+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:08.743109+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:09.743282+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:10.743409+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:11.743628+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:12.743748+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:13.743878+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:14.744075+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:15.744216+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:16.744362+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:17.744499+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:18.744674+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:19.744853+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 13008896 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:20.745010+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:21.745175+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:22.745352+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:23.745550+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:24.745753+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:25.746400+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:26.746629+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:27.746780+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:28.747025+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:29.747237+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:30.747369+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:31.747657+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:32.747883+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:33.748138+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:34.748368+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:35.748564+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:36.748817+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 9121 writes, 33K keys, 9121 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9121 writes, 2154 syncs, 4.23 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2198 writes, 6778 keys, 2198 commit groups, 1.0 writes per commit group, ingest: 9.44 MB, 0.02 MB/s
                                           Interval WAL: 2198 writes, 799 syncs, 2.75 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:37.749073+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:38.749264+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:39.749482+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:40.749814+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:41.750146+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:42.750398+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:43.750600+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:44.750754+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:45.750944+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:46.751164+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:47.751350+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:48.751582+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:49.751830+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:50.751948+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:51.752129+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:52.752359+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:53.752534+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:54.752731+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:55.752958+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:56.753105+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:57.753278+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:58.753450+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 12992512 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:14:59.753604+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:00.753737+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:01.754023+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:02.754253+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:03.754451+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:04.754729+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 12976128 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:05.754950+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 12976128 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:06.755087+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 12976128 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:07.755238+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 12976128 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:08.755402+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 12976128 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:09.755577+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 12976128 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:10.755757+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 12976128 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:11.755981+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 12976128 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:12.756140+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 12976128 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:13.756386+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 12976128 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:14.756544+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 12976128 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:15.756718+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 12976128 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:16.757020+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 12976128 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:17.757192+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 12976128 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:18.757339+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 12976128 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:19.757538+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 12976128 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:20.757721+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:21.757963+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:22.758103+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:23.758228+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:24.758387+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:25.758570+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:26.758729+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:27.758981+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:28.759103+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:29.759244+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:30.759400+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:31.759572+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:32.759709+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:33.759840+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:34.760025+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:35.760219+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183010 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:36.760393+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:37.760587+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:38.760693+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 267.902801514s of 267.914978027s, submitted: 1
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:39.760878+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95338496 unmapped: 13041664 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:40.761040+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95338496 unmapped: 13041664 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:41.761267+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95338496 unmapped: 13041664 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:42.761503+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95338496 unmapped: 13041664 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:43.761690+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95338496 unmapped: 13041664 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:44.761945+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95338496 unmapped: 13041664 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:45.762071+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95338496 unmapped: 13041664 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:46.762278+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95338496 unmapped: 13041664 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:47.762396+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95338496 unmapped: 13041664 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:48.762532+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95338496 unmapped: 13041664 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:49.762686+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95338496 unmapped: 13041664 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:50.762865+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95338496 unmapped: 13041664 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:51.763083+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95338496 unmapped: 13041664 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:52.763254+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95338496 unmapped: 13041664 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:53.763385+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95354880 unmapped: 13025280 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:54.763507+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95354880 unmapped: 13025280 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:55.763648+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95354880 unmapped: 13025280 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:56.763792+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95354880 unmapped: 13025280 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:57.763952+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95354880 unmapped: 13025280 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:58.764018+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95354880 unmapped: 13025280 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:15:59.764152+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95354880 unmapped: 13025280 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:00.764280+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95354880 unmapped: 13025280 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:01.764434+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95354880 unmapped: 13025280 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:02.764529+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95354880 unmapped: 13025280 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:03.764646+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95354880 unmapped: 13025280 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:04.764753+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95354880 unmapped: 13025280 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:05.764872+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95354880 unmapped: 13025280 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:06.764969+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95354880 unmapped: 13025280 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:07.765096+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 13017088 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:08.765249+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 13017088 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:09.765384+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 13017088 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:10.765545+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 13017088 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:11.765748+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 13017088 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:12.765928+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 13017088 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:13.766070+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:14.766217+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:15.766289+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:16.766427+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:17.766541+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:18.766776+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:19.766942+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:20.767051+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:21.767216+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:22.767382+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:23.767518+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:24.767709+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:25.767869+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:26.768010+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:27.768182+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:28.768346+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:29.768535+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:30.768681+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:31.768927+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:32.769109+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13000704 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:33.769267+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:34.769409+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:35.769514+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:36.769679+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:37.769843+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:38.769973+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:39.770110+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:40.770243+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:41.770430+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:42.770580+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:43.770686+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:44.770880+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:45.771124+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:46.771228+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:47.771373+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:48.771661+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:49.771839+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:50.771976+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:51.772511+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:52.772706+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 12984320 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:53.773035+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:54.773241+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:55.774433+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:56.775869+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:57.776295+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:58.777772+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:16:59.778025+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:00.778168+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:01.778336+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:02.779221+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:03.780161+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:04.780357+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:05.780538+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:06.780708+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:07.781208+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:08.781366+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:09.781617+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:10.781762+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:11.781996+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:12.782242+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 12967936 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:13.782433+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 12951552 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:14.782635+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 12951552 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:15.782819+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 12951552 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:16.782986+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 12951552 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:17.783133+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 12951552 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:18.783255+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 12951552 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:19.783469+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 12951552 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:20.783841+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 12951552 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:21.784048+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 12951552 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:22.784194+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 12951552 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:23.784432+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 12951552 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:24.784586+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 12951552 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:25.784748+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 12951552 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:26.784936+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 12951552 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:27.785130+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 12951552 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:28.785302+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 12951552 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:29.785511+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 12951552 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:30.785714+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 12951552 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:31.785988+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 12951552 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:32.786172+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 12951552 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:33.786358+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 12935168 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:34.786537+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 12935168 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:35.786675+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 12935168 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:36.786802+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 12935168 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:37.787075+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 12935168 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:38.787274+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 12935168 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:39.787424+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 12935168 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:40.787587+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 12935168 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:41.787773+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 12935168 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:42.787938+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 12935168 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:43.788131+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 12935168 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:44.788299+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 12935168 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:45.788454+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 12935168 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:46.788573+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 12935168 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:47.788692+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 12935168 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:48.788858+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 12935168 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:49.789034+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 12935168 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:50.789164+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:51.789317+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 12935168 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:52.789470+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95453184 unmapped: 12926976 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:53.789602+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95453184 unmapped: 12926976 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:54.789762+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 12910592 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:55.790071+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 12910592 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:56.790214+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 12910592 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:57.790385+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 12910592 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:58.790581+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 12910592 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:17:59.790736+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 12910592 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:00.790854+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 12910592 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:01.791028+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 12910592 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:02.791161+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 12910592 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:03.791292+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 12910592 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:04.791466+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 12910592 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:05.791614+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 12910592 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:06.791763+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 12910592 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:07.791882+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 12910592 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:08.792037+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 12910592 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:09.792180+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 12910592 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:10.792325+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 12910592 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:11.792521+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 12910592 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:12.792655+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 12910592 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:13.792805+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 12910592 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:14.792968+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 12894208 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:15.793115+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 12894208 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:16.793315+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 12894208 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:17.793514+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 12894208 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:18.793695+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 12894208 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:19.793954+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 12894208 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:20.794143+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 12894208 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:21.794371+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 12894208 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:22.794601+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 12894208 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:23.794768+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 12894208 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:24.794932+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 12886016 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:25.795094+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 12886016 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:26.795274+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 12886016 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:27.795444+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 12886016 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:28.795583+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 12886016 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:29.795768+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 12886016 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:30.795963+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 12886016 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:31.796268+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 12886016 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:32.796436+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 12886016 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:33.796683+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 12886016 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:34.796822+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95510528 unmapped: 12869632 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:35.796950+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95510528 unmapped: 12869632 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:36.797086+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95510528 unmapped: 12869632 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:37.797290+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95510528 unmapped: 12869632 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:38.797410+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95510528 unmapped: 12869632 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:39.797540+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95510528 unmapped: 12869632 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:40.797660+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95510528 unmapped: 12869632 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:41.797845+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95510528 unmapped: 12869632 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:42.798054+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95510528 unmapped: 12869632 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:43.798245+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95510528 unmapped: 12869632 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:44.798359+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95510528 unmapped: 12869632 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:45.798520+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95510528 unmapped: 12869632 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:46.798717+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95510528 unmapped: 12869632 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:47.798908+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95510528 unmapped: 12869632 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:48.799066+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95518720 unmapped: 12861440 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:49.799279+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95518720 unmapped: 12861440 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcb34d0/0xdae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:50.799458+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95518720 unmapped: 12861440 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 17:19:24 compute-0 ceph-osd[88140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 17:19:24 compute-0 ceph-osd[88140]: bluestore.MempoolThread(0x559b45919b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182786 data_alloc: 218103808 data_used: 561152
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:51.799653+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'config diff' '{prefix=config diff}'
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95174656 unmapped: 13205504 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'config show' '{prefix=config show}'
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'counter dump' '{prefix=counter dump}'
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'counter schema' '{prefix=counter schema}'
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:52.799787+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95584256 unmapped: 12795904 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: tick
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_tickets
Oct 01 17:19:24 compute-0 ceph-osd[88140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T17:18:53.799930+0000)
Oct 01 17:19:24 compute-0 ceph-osd[88140]: prioritycache tune_memory target: 4294967296 mapped: 95313920 unmapped: 13066240 heap: 108380160 old mem: 2845415832 new mem: 2845415832
Oct 01 17:19:24 compute-0 ceph-osd[88140]: do_command 'log dump' '{prefix=log dump}'
Oct 01 17:19:24 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 17:19:25 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14963 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/604023589' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 01 17:19:25 compute-0 ceph-mon[74273]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 01 17:19:25 compute-0 ceph-mon[74273]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 01 17:19:25 compute-0 ceph-mon[74273]: pgmap v1520: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:25 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/659991805' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 01 17:19:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct 01 17:19:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/877741351' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 01 17:19:25 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 01 17:19:25 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/355484434' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 01 17:19:26 compute-0 ceph-mon[74273]: from='client.14963 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/877741351' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 01 17:19:26 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/355484434' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 01 17:19:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 01 17:19:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3874563980' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 01 17:19:26 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:26 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 01 17:19:26 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3547527531' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 01 17:19:26 compute-0 nova_compute[259504]: 2025-10-01 17:19:26.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:19:27 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14973 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3874563980' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 01 17:19:27 compute-0 ceph-mon[74273]: pgmap v1521: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:27 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3547527531' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 01 17:19:27 compute-0 systemd[1]: Starting Hostname Service...
Oct 01 17:19:27 compute-0 systemd[1]: Started Hostname Service.
Oct 01 17:19:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 01 17:19:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4003595137' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 01 17:19:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:19:27 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 01 17:19:27 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1551991560' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 01 17:19:28 compute-0 ceph-mon[74273]: from='client.14973 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/4003595137' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 01 17:19:28 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1551991560' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 01 17:19:28 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14979 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:28 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:28 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 01 17:19:28 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3888368291' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 01 17:19:29 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14983 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:29 compute-0 ceph-mon[74273]: from='client.14979 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:29 compute-0 ceph-mon[74273]: pgmap v1522: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:29 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3888368291' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 01 17:19:29 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14985 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:29 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 01 17:19:29 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/359643283' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 01 17:19:30 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct 01 17:19:30 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1791952268' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 01 17:19:30 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:30 compute-0 ceph-mon[74273]: from='client.14983 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:30 compute-0 ceph-mon[74273]: from='client.14985 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:30 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/359643283' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 01 17:19:30 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1791952268' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 01 17:19:30 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14991 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:30 compute-0 nova_compute[259504]: 2025-10-01 17:19:30.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:19:30 compute-0 nova_compute[259504]: 2025-10-01 17:19:30.751 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 17:19:30 compute-0 nova_compute[259504]: 2025-10-01 17:19:30.751 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 17:19:30 compute-0 nova_compute[259504]: 2025-10-01 17:19:30.812 2 DEBUG nova.compute.manager [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14993 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005739061380803542 of space, bias 4.0, pg target 0.6886873656964251 quantized to 16 (current 16)
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 17:19:31 compute-0 ceph-mgr[74571]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 17:19:31 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct 01 17:19:31 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3780821554' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 01 17:19:31 compute-0 ceph-mon[74273]: pgmap v1523: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:31 compute-0 ceph-mon[74273]: from='client.14991 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:31 compute-0 ceph-mon[74273]: from='client.14993 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:31 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3780821554' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 01 17:19:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct 01 17:19:32 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3694863584' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 01 17:19:32 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.14999 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:32 compute-0 ceph-mgr[74571]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:32 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/3694863584' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 01 17:19:32 compute-0 ceph-mon[74273]: from='client.14999 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:32 compute-0 ceph-mon[74273]: pgmap v1524: 305 pgs: 305 active+clean; 77 MiB data, 335 MiB used, 60 GiB / 60 GiB avail
Oct 01 17:19:32 compute-0 nova_compute[259504]: 2025-10-01 17:19:32.749 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:19:32 compute-0 nova_compute[259504]: 2025-10-01 17:19:32.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:19:32 compute-0 nova_compute[259504]: 2025-10-01 17:19:32.750 2 DEBUG oslo_service.periodic_task [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 17:19:32 compute-0 nova_compute[259504]: 2025-10-01 17:19:32.782 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:19:32 compute-0 nova_compute[259504]: 2025-10-01 17:19:32.782 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:19:32 compute-0 nova_compute[259504]: 2025-10-01 17:19:32.782 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 17:19:32 compute-0 nova_compute[259504]: 2025-10-01 17:19:32.782 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 17:19:32 compute-0 nova_compute[259504]: 2025-10-01 17:19:32.783 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 17:19:32 compute-0 ceph-mon[74273]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 17:19:32 compute-0 ceph-mgr[74571]: log_channel(audit) log [DBG] : from='client.15001 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 17:19:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1306014352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:19:33 compute-0 nova_compute[259504]: 2025-10-01 17:19:33.241 2 DEBUG oslo_concurrency.processutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 17:19:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 01 17:19:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589334095' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 01 17:19:33 compute-0 nova_compute[259504]: 2025-10-01 17:19:33.408 2 WARNING nova.virt.libvirt.driver [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 17:19:33 compute-0 nova_compute[259504]: 2025-10-01 17:19:33.409 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4802MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 17:19:33 compute-0 nova_compute[259504]: 2025-10-01 17:19:33.409 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 17:19:33 compute-0 nova_compute[259504]: 2025-10-01 17:19:33.410 2 DEBUG oslo_concurrency.lockutils [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 17:19:33 compute-0 nova_compute[259504]: 2025-10-01 17:19:33.616 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 17:19:33 compute-0 nova_compute[259504]: 2025-10-01 17:19:33.616 2 DEBUG nova.compute.resource_tracker [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 17:19:33 compute-0 ceph-mon[74273]: from='client.15001 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 17:19:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/1306014352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 17:19:33 compute-0 ceph-mon[74273]: from='client.? 192.168.122.100:0/589334095' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 01 17:19:33 compute-0 ceph-mon[74273]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Oct 01 17:19:33 compute-0 ceph-mon[74273]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1111402727' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 01 17:19:33 compute-0 nova_compute[259504]: 2025-10-01 17:19:33.787 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Refreshing inventories for resource provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 01 17:19:33 compute-0 nova_compute[259504]: 2025-10-01 17:19:33.883 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Updating ProviderTree inventory for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 01 17:19:33 compute-0 nova_compute[259504]: 2025-10-01 17:19:33.884 2 DEBUG nova.compute.provider_tree [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Updating inventory in ProviderTree for provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 01 17:19:33 compute-0 nova_compute[259504]: 2025-10-01 17:19:33.900 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Refreshing aggregate associations for resource provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 01 17:19:33 compute-0 nova_compute[259504]: 2025-10-01 17:19:33.937 2 DEBUG nova.scheduler.client.report [None req-304b8284-bbd7-4297-b289-8b57af58afac - - - - - -] Refreshing trait associations for resource provider 2417da73-53f1-4edf-ae4c-fbd9fa470d6b, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_ABM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_ACCELERATORS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
